Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models because their denoising models operate over the entire sequence. The global planning and iterative refinement capabilities of dLLMs are particularly useful for code generation. However, current training and inference mechanisms for dLLMs in coding are still under-explored. To demystify the decoding behavior of dLLMs and unlock their potential for coding, we systematically investigate their denoising processes and reinforcement learning (RL) methods. We train a 7B dLLM, **DiffuCoder**, on 130B tokens of code. Using this model as a testbed, we analyze its decoding behavior, revealing how it differs from that of AR models: (1) dLLMs can decide how causal their generation should be without relying on semi-AR decoding, and (2) increasing the sampling temperature diversifies not only token choices but also their generation order. This diversity creates a rich search space for RL rollouts. For RL training, to reduce the variance of token log-likelihood estimates and maintain training efficiency, we propose **coupled-GRPO**, a novel sampling scheme that constructs complementary mask noise for the completions used in training. In our experiments, coupled-GRPO significantly improves DiffuCoder’s performance on code generation benchmarks (+4.4% on EvalPlus) and reduces reliance on AR bias during decoding. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework. A minimal sketch of the complementary-masking idea is given after the footnotes below.
- †The University of Hong Kong (HKU)
- ** Work done while at Apple
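
The sketch below illustrates, under stated assumptions, the complementary-masking idea behind coupled-GRPO: each completion is corrupted twice with masks that partition its positions, so every token's log-likelihood is estimated exactly once across the pair, reducing estimator variance without extra forward passes per token. The helper `model_logprobs(masked_ids, mask)` is a hypothetical callable standing in for the dLLM's per-token scoring of masked positions; it is not part of the released code, and the exact noise schedule in the paper may differ.

```python
# A minimal sketch, assuming:
#  - `token_ids` is a 1-D LongTensor holding one completion,
#  - `model_logprobs(masked_ids, mask)` (hypothetical) returns the model's
#    per-token log-likelihoods for exactly the masked positions.
import torch

def coupled_masks(completion_len: int, mask_ratio: float = 0.5):
    """Sample a random mask over completion positions and its complement."""
    mask_a = torch.rand(completion_len) < mask_ratio  # positions masked in pass A
    mask_b = ~mask_a                                  # remaining positions, masked in pass B
    return mask_a, mask_b

def coupled_logprob_estimate(token_ids, model_logprobs, mask_token_id, mask_ratio=0.5):
    """Estimate per-token log-likelihoods using two complementary corruptions."""
    mask_a, mask_b = coupled_masks(len(token_ids), mask_ratio)
    logps = torch.empty(len(token_ids))
    for mask in (mask_a, mask_b):
        masked = token_ids.clone()
        masked[mask] = mask_token_id                # corrupt only this pass's positions
        logps[mask] = model_logprobs(masked, mask)  # score the masked tokens in one pass
    # Every completion token is scored exactly once across the two passes.
    return logps
```

The design intent in this sketch is that the two masks partition the completion, so no token is left unscored and none is scored twice; the per-token estimates can then be aggregated into the sequence log-likelihood used for the GRPO advantage weighting.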







