Rebeca Moen
Jan 26, 2026 23:09
Together AI’s DSGym framework benchmarks LLM agents on 90+ bioinformatics tasks and 92 Kaggle competitions. Its 4B-parameter model matches much larger rivals on several benchmarks.
Together AI has released DSGym, a comprehensive framework for evaluating and training AI agents designed to perform data science tasks autonomously. The framework includes over 90 bioinformatics challenges and 92 Kaggle competition datasets, providing standardized benchmarks that address fragmentation issues plaguing existing evaluation methods.
The standout claim: Together AI’s 4 billion parameter model, trained using DSGym’s synthetic trajectory generation, achieves performance competitive with models 50 times its size on certain benchmarks.
Benchmark Results Show Surprising Efficiency
The published benchmarks reveal interesting performance dynamics across model sizes. Together AI’s Qwen3-4B-DSGym-SFT-2k model—fine-tuned using the framework—scored 59.36% on QRData-Verified and 77.78% on DABStep-easy tasks. That puts it ahead of the base Qwen3-4B-Instruct model (45.27% and 58.33% respectively) and competitive with models like Deepseek-v3.1 and GPT-OSS-120B on several metrics.
Claude 4.5 Sonnet currently leads the pack on harder tasks, hitting 37.04% on DABStep-hard compared to the fine-tuned 4B model’s 33.07%. But that gap is strikingly small given the difference in model scale.
Kimi-K2-Instruct posted the highest QRData-Verified score at 63.68%, while GPT-4o achieved 92.26% on DAEval-Verified—suggesting different architectures excel at different task types.
Why This Matters for AI Development
DSGym tackles a real problem in the AI agent space. Current benchmarks suffer from inconsistent evaluation interfaces and limited task diversity, making it difficult to compare agent performance meaningfully. The framework’s modular architecture allows researchers to add new tasks, agent scaffolds, and tools without rebuilding from scratch.
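The article does not spell out DSGym’s API, but a registry-style plug-in design is one common way to make this kind of framework extensible. The sketch below is purely illustrative: names such as DataScienceTask, register_task, and run_benchmark are assumptions, not DSGym’s actual interface.

```python
# Hypothetical sketch of a plug-in benchmark registry, showing how a modular
# framework could accept new tasks without changes to the core harness.
# All class and function names here are illustrative, not DSGym's real API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class TaskResult:
    task_id: str
    score: float  # normalized score in [0.0, 1.0] for this task


class DataScienceTask:
    """Base class a new benchmark task would subclass."""
    task_id: str = "base"

    def prompt(self) -> str:
        """Return the natural-language task given to the agent."""
        raise NotImplementedError

    def grade(self, agent_output: str) -> TaskResult:
        """Score the agent's output against the task's ground truth."""
        raise NotImplementedError


TASK_REGISTRY: Dict[str, DataScienceTask] = {}


def register_task(task: DataScienceTask) -> None:
    """Add a task to the suite; existing tasks and tooling stay untouched."""
    TASK_REGISTRY[task.task_id] = task


def run_benchmark(agent: Callable[[str], str]) -> Dict[str, float]:
    """Run every registered task through an agent callable and collect scores."""
    return {tid: task.grade(agent(task.prompt())).score
            for tid, task in TASK_REGISTRY.items()}
```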
The execution-verified data synthesis pipeline is particularly notable. Rather than training on static datasets, the system generates synthetic training trajectories that are validated through actual code execution—reducing the garbage-in-garbage-out problem that hampers many AI training pipelines.
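As a rough illustration of the idea (not Together AI’s actual pipeline), execution verification can be as simple as keeping a synthesized (prompt, code, answer) example only if the generated code runs cleanly and reproduces the expected answer. The snippet below is a minimal sketch under that assumption; a real system would execute code in an isolated sandbox rather than in-process.

```python
# Minimal illustration of execution-verified filtering for synthetic training
# trajectories: keep a generated (prompt, code, answer) example only if the
# code executes without error and its printed output matches the expected
# answer. Simplified sketch only; production pipelines sandbox execution.
import contextlib
import io


def verify_trajectory(code: str, expected_answer: str) -> bool:
    """Execute generated code and compare its stdout to the expected answer."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # real systems run this in an isolated sandbox
    except Exception:
        return False  # code that crashes is discarded
    return buffer.getvalue().strip() == expected_answer.strip()


def filter_synthetic_data(candidates):
    """Yield only candidates whose executed code matches the expected answer."""
    for prompt, code, answer in candidates:
        if verify_trajectory(code, answer):
            yield {"prompt": prompt, "code": code, "answer": answer}


# Example: this candidate passes verification, so it would be kept for training.
samples = [("Compute the mean of [2, 4, 6].",
            "print(sum([2, 4, 6]) / 3)", "4.0")]
print(list(filter_synthetic_data(samples)))
```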
For companies building AI-powered data analysis tools, DSGym provides a standardized way to measure progress. The bioinformatics focus (DSBio) and prediction task coverage (DSPredict) extend beyond generic coding benchmarks into domain-specific applications where AI agents could deliver real productivity gains.
What’s Next
The framework is positioned as an evolving testbed rather than a static benchmark suite. Together AI has emphasized the extensibility angle, suggesting they’ll continue adding task categories and evaluation metrics. With AI agent development accelerating across the industry, having a common evaluation standard could help separate genuine capability improvements from benchmark gaming—though that’s always easier said than done.
