ReasonGen-R1:
CoT for Autoregressive Image Generation Models through SFT and RL

Yu Zhang1*    Yunqi Li1*    Yifan Yang2†    Rui Wang3    Yuqing Yang2    Qi Dai2    Jianmin Bao2    Dongdong Chen2    Chong Luo2    Lili Qiu2   
1 ShanghaiTech University         2 Microsoft Corporation         3 Fudan University
*Equal contribution. This work was completed during the internships of Yu Zhang and Yunqi Li at Microsoft Research Asia.

Corresponding to: yifanyang@microsoft.com

Abstract

Although chain-of-thought (CoT) reasoning and reinforcement learning (RL) have driven breakthroughs in NLP, their integration into generative vision models remains underexplored. We introduce ReasonGen-R1, a two-stage framework that first imbues an autoregressive image generator with explicit text-based "thinking" skills via supervised fine-tuning (SFT) on a newly generated reasoning dataset of written rationales, and then refines its outputs with Group Relative Policy Optimization (GRPO). To enable the model to reason through text before generating images, we automatically generate and release a corpus of model-crafted rationales paired with visual prompts, enabling controlled planning of object layouts, styles, and scene compositions. Our GRPO algorithm uses reward signals from a pretrained vision–language model to assess overall visual quality, optimizing the policy in each update. Evaluations on GenEval, DPG, and the T2I benchmark demonstrate that ReasonGen-R1 consistently outperforms strong baselines and prior state-of-the-art models. We will open-source our generated reasoning dataset and training code to accelerate further advances in text-based reasoning-driven image generation.
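As a rough illustration of the group-relative update described above, the sketch below computes normalized advantages from rewards assigned by a VLM judge to a group of images sampled for the same prompt. The function name, tensor shapes, and epsilon value are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each sample's reward by the mean
    and std of its prompt group. `rewards` has shape [num_prompts, group_size]."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled images each, rewards scored by a VLM judge.
rewards = torch.tensor([[0.2, 0.8, 0.5, 0.9],
                        [0.1, 0.4, 0.3, 0.6]])
print(grpo_advantages(rewards))
```

Images that score above their group's average receive positive advantages and are reinforced; below-average samples are suppressed, so no separate value network is needed.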

Teaser image

Left: We show side-by-side visualizations of images generated by Janus-Pro-7B and ReasonGen-R1 from identical prompts. Right: We present a performance comparison across three instruction-following benchmarks. In every case, ReasonGen-R1 outperforms the base Janus-Pro-7B model, demonstrating a substantial improvement in its ability to follow instructions.

Highlights

  • We validate a novel training paradigm that integrates Chain-of-Thought supervised fine-tuning (CoT SFT) with GRPO reinforcement learning, leveraging multimodal LLMs as reward models to effectively enable models to “think and generate” (see the sketch after this list).
  • We construct a large-scale CoT image-generation dataset, providing a valuable resource for future research and development.
  • Extensive experiments demonstrate the efficiency and effectiveness of the proposed ReasonGen-R1 framework across multiple benchmarks.
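To make the “think and generate” flow concrete, here is a minimal, hypothetical sketch of the inference-time interface: `write_rationale` and `render_image` stand in for the model's text and image heads, and the `<think>` delimiter is an assumption for illustration, not necessarily the prompt format used by ReasonGen-R1.

```python
from typing import Callable

def think_then_generate(
    prompt: str,
    write_rationale: Callable[[str], str],  # hypothetical: text head plans layout/style in words
    render_image: Callable[[str], bytes],   # hypothetical: image head conditions on prompt + plan
) -> bytes:
    """Two-stage inference: reason in text first, then generate the image."""
    rationale = write_rationale(prompt)
    conditioned_prompt = f"{prompt}\n<think>{rationale}</think>"  # delimiter is an assumption
    return render_image(conditioned_prompt)

# Stub usage (replace the lambdas with real model calls):
img = think_then_generate(
    "a red cube on a blue sphere",
    write_rationale=lambda p: "place the sphere at the center; stack the cube on top",
    render_image=lambda conditioned: conditioned.encode(),  # placeholder for the image decoder
)
```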

Generated Samples

BibTeX

@misc{zhang2025reasongenr1cotautoregressiveimage,
      title={ReasonGen-R1: CoT for Autoregressive Image generation models through SFT and RL}, 
      author={Yu Zhang and Yunqi Li and Yifan Yang and Rui Wang and Yuqing Yang and Dai Qi and Jianmin Bao and Dongdong Chen and Chong Luo and Lili Qiu},
      year={2025},
      eprint={2505.24875},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.24875}, 
}