Large language models excel with reinforcement learning (RL), but fully unlocking this potential requires a mid-training stage. An effective mid-training phase should identify a compact set of useful actions and enable fast selection among them via online RL. We formalize this intuition by presenting the first theoretical result on how mid-training shapes post-training: it characterizes an action subspace that minimizes both the value approximation error from pruning and the RL error during subsequent planning. Our analysis reveals two key determinants of mid-training effectiveness: pruning efficiency, which shapes the prior of the initial RL policy, and its impact on RL convergence, which governs the extent to which that policy can be improved through online interactions. These results suggest that mid-training is most effective when the decision space is compact and the effective horizon is short, highlighting the importance of operating in the space of action abstractions rather than primitive actions. Building on these insights, we propose Reasoning as Action Abstractions (RA3), a scalable mid-training algorithm. Specifically, we derive a sequential variational lower bound and optimize it by iteratively discovering temporally-consistent latent structures via RL, followed by fine-tuning on the bootstrapped data. Experiments on code generation tasks demonstrate the effectiveness of our approach. Across several base models, RA3 improves the average performance on HumanEval and MBPP by 8 and 4 points over the base model and the next-token prediction baseline, respectively. Furthermore, RA3 achieves faster convergence and higher asymptotic performance in RLVR on HumanEval+, MBPP+, LiveCodeBench, and Codeforces.
- † Northwestern University
- ‡ University of Illinois Urbana–Champaign (UIUC)
- ** Work done while at Apple
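The iterative procedure the abstract describes, alternating between discovering temporally-consistent latent structures via RL and fine-tuning on the bootstrapped data, can be sketched as a simple loop. The code below is an illustrative toy, not the authors' implementation: `discover_abstractions` and `fine_tune` are hypothetical stand-ins (fixed-length token chunking and frequency counting) for the RL-based inference and fine-tuning steps, and `ra3_midtrain` only captures the overall control flow.

```python
def discover_abstractions(model, traj, span=2):
    # Toy stand-in for the RL step: group consecutive tokens into
    # fixed-length spans, mimicking temporally-consistent latent chunks.
    return [tuple(traj[i:i + span]) for i in range(0, len(traj), span)]

def fine_tune(model, bootstrapped):
    # Toy stand-in for fine-tuning: treat the model as a table of
    # abstraction frequencies and update it from the bootstrapped data.
    for _, abstractions in bootstrapped:
        for a in abstractions:
            model[a] = model.get(a, 0) + 1
    return model

def ra3_midtrain(model, corpus, num_iters=3):
    """Sketch of the RA3-style loop: alternate between (1) inferring
    latent action abstractions for each trajectory and (2) fine-tuning
    on the trajectory/abstraction pairs (the bootstrapped data)."""
    for _ in range(num_iters):
        abstractions = [discover_abstractions(model, traj) for traj in corpus]
        bootstrapped = list(zip(corpus, abstractions))
        model = fine_tune(model, bootstrapped)
    return model
```

In the real algorithm the inner steps would optimize the sequential variational lower bound; the sketch only conveys the alternating structure.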






