Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning
Abstract
Large language model agents trained in synthetic environments with code-driven simulations and database-backed state transitions demonstrate superior out-of-distribution generalization compared to traditional benchmark-specific approaches.
Recent advances in large language models (LLMs) have empowered autonomous agents to perform complex tasks that require multi-turn interactions with tools and environments. However, scaling such agent training is limited by the lack of diverse and reliable environments. In this paper, we propose Agent World Model (AWM), a fully synthetic environment generation pipeline. Using this pipeline, we scale to 1,000 environments covering everyday scenarios, in which agents can interact with rich toolsets (35 tools per environment on average) and obtain high-quality observations. Notably, these environments are code-driven and backed by databases, providing more reliable and consistent state transitions than environments simulated by LLMs. Moreover, they enable more efficient agent interaction than collecting trajectories from realistic environments. To demonstrate the effectiveness of this resource, we perform large-scale reinforcement learning for multi-turn tool-use agents. Thanks to the fully executable environments and accessible database states, we can also design reliable reward functions. Experiments on three benchmarks show that training exclusively in synthetic environments, rather than benchmark-specific ones, yields strong out-of-distribution generalization. The code is available at https://github.com/Snowflake-Labs/agent-world-model.
Community
Introducing Agent World Model (AWM): we synthesized 1,000 code-driven environments with 35K tools and 10K tasks for large-scale agentic reinforcement learning!
No real APIs. No human design. Just 100 seed names, turned into fully functional, database-backed agent environments exposed via an MCP interface.
Agents trained purely on synthetic envs generalize to out-of-distribution benchmarks. Code, environments, and models are all open-sourced.
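To make the "code-driven, database-backed" idea concrete, here is a minimal sketch of what one such environment tool could look like. Everything here (the `BookingEnv` class, its table, and the tool names) is a hypothetical illustration under our own assumptions, not code from the AWM release; in AWM the tools are exposed to the agent through an MCP interface.

```python
# Minimal sketch of a database-backed synthetic environment.
# BookingEnv, create_booking, and get_booking are hypothetical names,
# not the actual AWM implementation.
import sqlite3
import json

class BookingEnv:
    """Toy environment whose state lives in a SQLite database, so every
    tool call is a deterministic, inspectable state transition."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE bookings (id INTEGER PRIMARY KEY, guest TEXT, status TEXT)"
        )

    # Each tool is plain code over the database: no LLM simulation involved.
    def create_booking(self, guest: str) -> str:
        cur = self.db.execute(
            "INSERT INTO bookings (guest, status) VALUES (?, 'confirmed')", (guest,)
        )
        self.db.commit()
        return json.dumps({"booking_id": cur.lastrowid, "status": "confirmed"})

    def get_booking(self, booking_id: int) -> str:
        row = self.db.execute(
            "SELECT id, guest, status FROM bookings WHERE id = ?", (booking_id,)
        ).fetchone()
        if row is None:
            return json.dumps({"error": "not found"})
        return json.dumps({"booking_id": row[0], "guest": row[1], "status": row[2]})

env = BookingEnv()
print(env.create_booking("Ada"))  # {"booking_id": 1, "status": "confirmed"}
print(env.get_booking(1))
```

Because the state is an actual database rather than an LLM's memory, the same sequence of tool calls always yields the same observations, which is what makes these environments reliable at scale.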
We train Qwen3 (4B/8B/14B) with online RL using the GRPO algorithm at serious scale:
- 1,024 parallel env instances per training step
- Hybrid reward: step-level format checks + task-level outcome verification (see the reward sketch after this list)
- History-aware training: sliding-window truncation aligned between training and inference (see the truncation sketch below)
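On the hybrid reward point, here is a minimal sketch of how step-level format checks could be combined with task-level outcome verification against the final database state, reusing the toy `BookingEnv` from the earlier sketch. The function names, the required JSON schema, and the 0.2/0.8 weighting are illustrative assumptions, not the paper's actual reward function.

```python
# Hypothetical hybrid reward sketch: step-level shaping for well-formed
# tool calls plus a task-level check on the final database state.
import json

def step_format_reward(tool_call_str: str) -> float:
    """Step-level check: small credit for a well-formed tool call."""
    try:
        call = json.loads(tool_call_str)
    except json.JSONDecodeError:
        return 0.0
    return 0.1 if "name" in call and "arguments" in call else 0.0

def task_outcome_reward(db, expected_rows: set) -> float:
    """Task-level check: full credit only if the final DB state matches the task spec."""
    rows = set(db.execute("SELECT guest, status FROM bookings").fetchall())
    return 1.0 if rows == expected_rows else 0.0

def hybrid_reward(tool_calls: list[str], db, expected_rows: set) -> float:
    step = sum(step_format_reward(c) for c in tool_calls) / max(len(tool_calls), 1)
    return 0.2 * step + 0.8 * task_outcome_reward(db, expected_rows)
```

Because the environment state is a real database, the outcome check is an exact query rather than an LLM judge, which is what makes the reward verifiable.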
Key insight: code-driven environments give more stable learning signals than LLM-simulated ones, and they're orders of magnitude faster.
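And on the history-aware training point, a sketch of the kind of sliding-window truncation that must match between the training dataloader and the inference server. The `max_turns` value and the message schema are assumptions for illustration.

```python
# Hypothetical sketch of history-aware sliding-window truncation.
# The key property is that the SAME function is applied when packing
# training sequences and when serving rollouts, so the policy never
# sees a context layout at inference time that it was not trained on.
def truncate_history(messages: list[dict], max_turns: int = 8) -> list[dict]:
    """Keep the system prompt plus only the most recent `max_turns` messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

# Applied identically in both places, e.g.:
#   training:  batch  = tokenize(truncate_history(trajectory))
#   serving:   prompt = render(truncate_history(conversation))
```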
Results on 3 out-of-distribution benchmarks (AWM does NOT train in any benchmark-specific environment):
- BFCLv3: 8B jumps 53.83 → 65.94 (+12.11)
- τ²-bench: competitive, 14B reaches 39.03 Pass@1
- MCP-Universe: best overall, 8B: 6.70 → 11.17
AWM is the ONLY method that improves over the base model on ALL three benchmarks.
Paper: https://arxiv.org/abs/2602.10090
Code: https://github.com/Snowflake-Labs/agent-world-model
Hugging Face: https://huggingface.co/datasets/Snowflake/AgentWorldModel-1K
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Mock Worlds, Real Skills: Building Small Agentic Language Models with Synthetic Tasks, Simulated Environments, and Rubric-Based Rewards (2026)
- From Self-Evolving Synthetic Data to Verifiable-Reward RL: Post-Training Multi-turn Interactive Tool-Using Agents (2026)
- ScaleEnv: Scaling Environment Synthesis from Scratch for Generalist Interactive Tool-Use Agent Training (2026)
- AutoForge: Automated Environment Synthesis for Agentic Reinforcement Learning (2025)
- EnvScaler: Scaling Tool-Interactive Environments for LLM Agent via Programmatic Synthesis (2026)
- ASTRA: Automated Synthesis of agentic Trajectories and Reinforcement Arenas (2026)
- Close the Loop: Synthesizing Infinite Tool-Use Data via Multi-Agent Role-Playing (2025)