RoboAlign: Learning Test-Time Reasoning for Language-Action Alignment in Vision-Language-Action Models
Abstract
RoboAlign is a systematic training framework that enhances embodied reasoning in multimodal large language models: it samples action tokens via zero-shot natural language reasoning and refines that reasoning with reinforcement learning, improving action accuracy and bridging the gap between language and low-level actions in vision-language-action models.
Improving embodied reasoning in multimodal large language models (MLLMs) is essential for building vision-language-action models (VLAs) on top of them that readily translate multimodal understanding into low-level actions. Accordingly, recent work has explored enhancing embodied reasoning in MLLMs through vision-question-answering-style supervision. However, these approaches have been reported to yield unstable VLA performance, often with only marginal or even negative gains. In this paper, we propose RoboAlign, a more systematic MLLM training framework that reliably improves VLA performance. Our key idea is to sample action tokens via zero-shot natural language reasoning and to refine this reasoning using reinforcement learning (RL) to improve action accuracy. As a result, RoboAlign bridges the modality gap between language and low-level actions in MLLMs and facilitates knowledge transfer from the MLLM to the VLA. To validate the effectiveness of RoboAlign, we train VLAs by adding a diffusion-based action head on top of an MLLM backbone and evaluate them on major robotics benchmarks. Remarkably, by performing RL-based alignment after SFT using less than 1% of the data, RoboAlign achieves performance improvements of 17.5%, 18.9%, and 106.6% over SFT baselines on LIBERO, CALVIN, and real-world environments, respectively.
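Since the abstract only sketches the method, the following is a minimal, hypothetical Python sketch of the RL-alignment loop it describes: sample natural-language reasoning plus action tokens from an MLLM policy, reward the sample by its action accuracy against a demonstration, and refine the reasoning with a REINFORCE-style policy-gradient update. Every name here (MLLMPolicy, action_accuracy_reward, the toy token setup) is an assumption for illustration, not the authors' implementation; RoboAlign's actual reward, sampling scheme, and RL algorithm may differ.

```python
# Hedged sketch, not the paper's code. A toy GRU stands in for the MLLM; the
# reward is a simple token-match score against demonstration action tokens.
import torch


class MLLMPolicy(torch.nn.Module):
    """Stand-in for an MLLM that emits reasoning tokens followed by action tokens."""

    def __init__(self, vocab_size=128, hidden=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, hidden)
        self.lm = torch.nn.GRU(hidden, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, vocab_size)

    def sample(self, prompt_ids, max_new_tokens=16):
        """Autoregressively sample tokens; return the ids and their log-probs."""
        ids = prompt_ids
        log_probs = []
        for _ in range(max_new_tokens):
            out, _ = self.lm(self.embed(ids))  # re-encode full sequence (toy setup)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            ids = torch.cat([ids, tok.unsqueeze(1)], dim=1)
        return ids, torch.stack(log_probs, dim=1)


def action_accuracy_reward(sampled_ids, target_action_ids):
    """Toy reward: fraction of trailing action tokens matching the demonstration."""
    pred = sampled_ids[:, -target_action_ids.shape[1]:]
    return (pred == target_action_ids).float().mean(dim=1)


policy = MLLMPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

prompt = torch.randint(0, 128, (4, 8))   # tokenized image + instruction (toy)
target = torch.randint(0, 128, (4, 4))   # ground-truth action tokens (toy)

for step in range(3):
    ids, logp = policy.sample(prompt, max_new_tokens=16)
    reward = action_accuracy_reward(ids, target)
    baseline = reward.mean()  # simple batch baseline for variance reduction
    loss = -((reward - baseline).detach() * logp.sum(dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: mean reward {reward.mean():.3f}")
```

The key design point the sketch mirrors is that the reward is computed only from action accuracy, so the gradient refines the intermediate natural-language reasoning solely through its effect on the sampled actions.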
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- VLA-Thinker: Boosting Vision-Language-Action Models through Thinking-with-Image Reasoning (2026)
- RoboInter: A Holistic Intermediate Representation Suite Towards Robotic Manipulation (2026)
- SAMoE-VLA: A Scene Adaptive Mixture-of-Experts Vision-Language-Action Model for Autonomous Driving (2026)
- VISTA: Enhancing Visual Conditioning via Track-Following Preference Optimization in Vision-Language-Action Models (2026)
- Beyond Imitation: Reinforcement Learning for Active Latent Planning (2026)