Peter Zhang
Jul 10, 2025 06:07
NVIDIA introduces NeMo-RL, an open-source library for reinforcement learning, enabling scalable training with GRPO and integration with Hugging Face models.
NVIDIA has unveiled NeMo-RL, a cutting-edge open-source library designed to enhance reinforcement learning (RL) capabilities, according to NVIDIA’s official blog. The library supports scalable model training, ranging from single-GPU prototypes to massive thousand-GPU deployments, and integrates seamlessly with popular frameworks like Hugging Face.
NeMo-RL’s Architecture and Features
NeMo-RL is part of the broader NVIDIA NeMo Framework, known for its versatility and high-performance capabilities. The library offers native integration with Hugging Face models along with optimized training and inference paths. It supports popular RL algorithms such as Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO), and uses Ray-based orchestration for efficient scheduling across workers.
The architecture of NeMo-RL is designed with flexibility in mind. It supports multiple training and rollout backends, and high-level algorithm implementations remain agnostic to backend specifics. As a result, models can be scaled up without modifying algorithm code, making the library suitable for both small-scale experiments and large-scale deployments.
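To make this backend-agnostic design concrete, the following is a minimal Python sketch of the general pattern: the algorithm talks only to abstract rollout and training interfaces, so the backend can change without touching the algorithm. The class and method names here are hypothetical stand-ins for illustration, not NeMo-RL's actual API.

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces for illustration only; NeMo-RL's actual classes
# and method signatures may differ.
class RolloutBackend(ABC):
    @abstractmethod
    def generate(self, prompts: list[str], max_new_tokens: int) -> list[str]:
        """Produce model completions used as RL rollouts."""

class TrainingBackend(ABC):
    @abstractmethod
    def step(self, batch: dict) -> dict:
        """Run one optimization step and return training metrics."""

def rl_iteration(rollouts: RolloutBackend, trainer: TrainingBackend,
                 prompts: list[str], max_len: int) -> dict:
    # The algorithm only depends on the abstract interfaces, so swapping a
    # single-GPU backend for a multi-node one requires no algorithm changes.
    completions = rollouts.generate(prompts, max_new_tokens=max_len)
    batch = {"prompts": prompts, "completions": completions}
    return trainer.step(batch)
```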
Implementing DeepScaleR with GRPO
The blog post explores the application of NeMo-RL to reproduce a DeepScaleR-1.5B recipe using the GRPO algorithm. This involves training a high-performing reasoning model based on Qwen-1.5B to match OpenAI's O1 model on the AIME24 academic math benchmark.
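The core idea behind GRPO is a group-relative advantage: several responses are sampled per prompt, and each reward is normalized against the group's mean and standard deviation, removing the need for a separate critic model. The snippet below is a simplified illustration of that computation in plain Python, not NeMo-RL's implementation.

```python
import statistics

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize each reward against its group's mean and standard deviation.

    GRPO scores each sampled response relative to the other responses in
    its group, so no learned value function is required.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled answers to one math prompt, scored 1.0 if correct.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```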
The training process is structured in three stages, each raising the maximum rollout sequence length: 8K first, then 16K, and finally 24K. This gradual increase helps manage the distribution of rollout sequence lengths and keeps the training process efficient.
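A minimal sketch of such a staged schedule is shown below; the function and configuration key names are hypothetical stand-ins used purely to illustrate the idea of resuming each stage from the previous stage's checkpoint, not NeMo-RL's actual configuration API.

```python
# Hypothetical staged-training sketch; key and function names are
# illustrative, not NeMo-RL's real configuration schema.
STAGE_MAX_SEQ_LENS = [8_192, 16_384, 24_576]  # 8K -> 16K -> 24K

def run_staged_training(train_fn, base_config: dict):
    """Run one training stage per sequence length, resuming from the
    previous stage's checkpoint so learning carries over."""
    checkpoint = None
    for max_seq_len in STAGE_MAX_SEQ_LENS:
        config = {
            **base_config,
            "max_rollout_seq_len": max_seq_len,
            "resume_from": checkpoint,
        }
        checkpoint = train_fn(config)  # assumed to return a checkpoint path
    return checkpoint
```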
Training Process and Evaluation
The training setup involves cloning the NeMo-RL repository and installing the necessary packages. Training is conducted in phases, with the model evaluated continuously to ensure performance benchmarks are met. The results showed that NeMo-RL reached a training reward of 0.65 in only 400 steps.
Evaluation on the AIME24 benchmark showed that the trained model surpassed OpenAI O1, highlighting the effectiveness of NeMo-RL when combined with the GRPO algorithm.
Getting Started with NeMo-RL
NeMo-RL is available for open-source use, providing detailed documentation and example scripts on its GitHub repository. This resource is ideal for those looking to experiment with reinforcement learning using scalable and efficient methods.
The library’s integration with Hugging Face and its modular design make it a powerful tool for researchers and developers seeking to leverage advanced RL techniques in their projects.
Image source: Shutterstock