LightningRL

Diffusion Large Language Models with a SOTA Accuracy–Parallelism Trade-off

ICML 2026 · Paper on arXiv · GitHub Code · Hugging Face Model

We introduce LightningRL, a reinforcement learning post-training framework for block-wise diffusion Large Language Models (dLLMs) that breaks the accuracy–parallelism trade-off. Applied to SDAR-8B, LightningRL reaches an average TPF (tokens per forward pass) of 7.32 and an AUP of 497.9, improving generation quality and inference speed simultaneously.
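
As a quick-start illustration, the sketch below loads this checkpoint through the standard `transformers` Auto classes. This is a minimal sketch under assumptions: it presumes the model loads with `AutoModelForCausalLM` and `trust_remote_code=True`, and that `generate` is wired up for block-wise diffusion decoding; the official inference loop lives in the GitHub repo and may differ.

```python
# Minimal usage sketch (assumptions: the checkpoint loads via the standard
# transformers Auto classes with trust_remote_code=True; the block-wise
# diffusion sampler from the GitHub repo may be required for best results).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SJTU-DENG-Lab/LightningRL-8B-b32-MATH500"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in F16
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Find all real solutions of x^2 - 5x + 6 = 0."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```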

  • LightningRL-8B-b32-MATH500, LightningRL-8B-b32-GSM8K, LightningRL-8B-b32-MBPP, and LightningRL-8B-b32-HumanEval are task-specific variants fine-tuned with different reward-weight configurations for targeted deployment (a variant-lookup sketch follows below).
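
Since the variants differ only in their repo id, a small lookup keeps deployment code benchmark-agnostic. The helper below is hypothetical: the MATH500 id is the one hosting this card, and the other three ids are assumed to follow the same naming pattern.

```python
# Hypothetical variant lookup; the MATH500 repo id comes from this collection,
# and the other ids are assumed to follow the same naming pattern.
VARIANTS = {
    "math500":   "SJTU-DENG-Lab/LightningRL-8B-b32-MATH500",
    "gsm8k":     "SJTU-DENG-Lab/LightningRL-8B-b32-GSM8K",
    "mbpp":      "SJTU-DENG-Lab/LightningRL-8B-b32-MBPP",
    "humaneval": "SJTU-DENG-Lab/LightningRL-8B-b32-HumanEval",
}

def variant_for(task: str) -> str:
    """Return the checkpoint id fine-tuned for the given benchmark."""
    return VARIANTS[task.lower()]
```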

Citation

@misc{hu2026lightningrlbreakingaccuracyparallelismtradeoff,
      title={LightningRL: Breaking the Accuracy-Parallelism Trade-off of Block-wise dLLMs via Reinforcement Learning}, 
      author={Yanzhe Hu and Yijie Jin and Pengfei Liu and Kai Yu and Zhijie Deng},
      year={2026},
      eprint={2603.13319},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2603.13319}, 
}
