
Add Vision / VLM models to environments and GRPO trainer

Open UlrickBL opened this issue 5 months ago • 5 comments

Description

This pull request introduces support for Vision-Language Models (VLMs) in the environments and the GRPO trainer. The implementation tracks pixel values and image grids as the base inputs, then converts images to base64 so they comply with the vLLM/OpenAI chat format. It is adapted to work with both standard text tokenizers and multimodal/mixin processors. It also adds image and answer logging to the wandb table to simplify data analysis during training.
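For context, an OpenAI/vLLM-compatible chat message carrying a base64-encoded image generally has this shape (a minimal sketch, not the exact helper added by this PR; names and fields are illustrative):

```python
def image_user_message(image_b64: str, text: str, mime: str = "image/png") -> dict:
    """Build an OpenAI/vLLM-compatible user message carrying one base64 image and a text prompt."""
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{image_b64}"}},
            {"type": "text", "text": text},
        ],
    }


# Usage: pair an encoded image with an OCR-style instruction.
message = image_user_message(image_b64="<base64 bytes>", text="Transcribe the text in this image.")
```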

The motivation for adding VLM support is strategic: I believe Vision-Language environments are critical for advancing AGI and Reinforcement Learning (RL) research. This feature was necessary to begin testing several promising, high-value environments.

Type of Change

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Documentation update
  • [ ] Test improvement

Testing

  • [x] All existing tests pass
  • [ ] New tests have been added to cover the changes
  • [x] Tests have been run locally with uv run pytest

It was end-to-end tested with 3 Prime-RL envs:

OCR VL with Qwen 2.5 VL 3B and 7B: https://app.primeintellect.ai/dashboard/environments/ulrick-bl/ocr-vl (single-turn, image)

Rebus VL Thinking with Qwen 2.5 VL 7B: https://app.primeintellect.ai/dashboard/environments/ulrick-bl/rebus-vl-thinking (single-turn, image)

Semantix with Qwen 2.5 0.5B: https://app.primeintellect.ai/dashboard/environments/ulrick-bl/semantic (multi-turn, text)

Test Coverage

  • Current coverage: 33%
  • Coverage after changes: 29%

Checklist

  • [x] My code follows the style guidelines of this project
  • [x] I have performed a self-review of my own code
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [ ] My changes generate no new warnings
  • [x] Any dependent changes have been merged and published

UlrickBL · Oct 01 '25 21:10

CLA assistant check
All committers have signed the CLA.

CLAassistant · Oct 01 '25 21:10

nice! absolutely agree that we want to add VLMs eventually, just hasn't been at the top of our priority list yet, though this implementation does look like a pretty nice starting point.

heads up that before merging, we'd want to ensure that:

  1. We can see stable training results on some toy tasks
  2. The implementation doesn't break any current usage patterns

For 2, we'll need to reorganize the codebase a bit so that some of the logic is properly contained in utility files that are only imported when needed. For example, we definitely don't want to add transformers (or anything that uses torch) as a core dependency, as it adds a lot of bloat/conflict potential and is unnecessary for API-based use cases. It would also be nice, if possible, to make PIL optional and work directly with b64 strings for eval-only use. The functions for processing outputs should probably move to other files that follow the lazy import/type-checking pattern used for the repo's other optional, trainer-related dependencies -- would you want to take a stab at this?
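roughly, the lazy import/type-checking pattern looks like this (module and function names here are just a sketch, not the repo's actual layout):

```python
# hypothetical utils/processing_utils.py
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # visible to type checkers only; transformers/torch stay out of the runtime deps
    from transformers import AutoProcessor


def load_processor(model_name: str) -> "AutoProcessor":
    """Import transformers only when a trainer actually needs a multimodal processor."""
    try:
        from transformers import AutoProcessor
    except ImportError as e:
        raise ImportError(
            "VLM training needs the optional 'transformers' dependency "
            "(pulled in by the trainer extras, not the core package)."
        ) from e
    return AutoProcessor.from_pretrained(model_name)
```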

willccbb · Oct 03 '25 03:10

Sounds good, I'll work on that!

UlrickBL · Oct 03 '25 18:10

Hello @willccbb,

For point 2, I managed to clean up the problematic dependencies, handle lazy imports, and reorganize the code into utils/image_utils.py and utils/processing_utils.py. Does this fit the current pattern?
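As an example of how PIL can stay optional for eval-only use, the image helper can accept either a raw b64 string or a PIL image and only import PIL when needed (a simplified sketch, not the exact contents of utils/image_utils.py):

```python
# simplified sketch of an image helper
import base64


def to_data_url(image, fmt: str = "png") -> str:
    """Accept a base64 string (no PIL needed) or a PIL image (PIL imported lazily)."""
    if isinstance(image, str):
        # already base64-encoded: just wrap it as a data URL for eval-only use
        return image if image.startswith("data:") else f"data:image/{fmt};base64,{image}"

    import io
    from PIL import Image  # optional dependency, only needed for PIL inputs

    if isinstance(image, Image.Image):
        buf = io.BytesIO()
        image.save(buf, format=fmt.upper())
        return f"data:image/{fmt};base64,{base64.b64encode(buf.getvalue()).decode()}"

    raise TypeError(f"Unsupported image type: {type(image)!r}")
```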

For point 1, I tried to find a task that was challenging enough to demonstrate the relevance of the training, but not too expensive to run.

I used an OCR environment I set up on Prime Hub (ocr-vl): https://app.primeintellect.ai/dashboard/environments/ulrick-bl/ocr-vl

I trained Qwen 2.5 VL 3B on the "hi" (Hindi) scope, since the model doesn't perform very well on this task. The reward is mainly based on format and CER (character error rate).
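For reference, a CER-style reward can be sketched as 1 minus the normalized character edit distance (the actual reward function in the env may differ in details):

```python
def char_error_rate(pred: str, ref: str) -> float:
    """Levenshtein distance between prediction and reference, normalized by reference length."""
    prev = list(range(len(ref) + 1))
    for i, p in enumerate(pred, start=1):
        curr = [i]
        for j, r in enumerate(ref, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (p != r)))
        prev = curr
    return prev[-1] / max(len(ref), 1)


def cer_reward(pred: str, ref: str) -> float:
    """Higher is better: 1.0 for an exact transcription, 0.0 at or beyond 100% CER."""
    return max(0.0, 1.0 - char_error_rate(pred, ref))
```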

A warning: there are some issues in the dataset I use as the base for the env, such as samples where the screenshot fails because of a popup, and my small training setup was quite sensitive to these (example image attached).

I trained with the following setup: Qwen 2.5 VL 3B with LoRA rank 16 on 2xA100 40GB:

```python
args = vf.grpo_defaults(run_name="ocr-vl")
args.per_device_train_batch_size = 8
args.num_generations = 16
args.gradient_accumulation_steps = 2
args.max_steps = 1000
args.eval_strategy = "steps"
args.eval_steps = 2
args.max_tokens = 1024
args.vllm_server_port = 8000
args.fp16 = True
args.temperature = 0.4
args.learning_rate = 1e-5
args.lr_scheduler_type = "cosine"
args.warmup_steps = 10
```
(screenshot: training reward curves attached)

I would say the training is stable and started off very well. The first slowdown in reward progression was due to a series of poor-quality images in the data, like the ones I showed earlier. Nevertheless, we can observe the model improving and maintaining stable training performance on the task, which highlights the relevance of the implementation.

If needed, I can spend some time cleaning the dataset and retraining it.

Let’s keep in touch if there’s anything else to adjust, test, or adapt.

UlrickBL · Oct 08 '25 21:10

+1 @willccbb, any idea when this PR might be merged or available?

anaszil · Oct 29 '25 20:10