VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
Paper: 2509.09372
The VLA-Adapter models
- Inference log (LIBERO-Long): https://huggingface.co/VLA-Adapter/LIBERO-Long/blob/main/Inference-Long--95.0.log
- Inference log (LIBERO-Spatial): https://huggingface.co/VLA-Adapter/LIBERO-Spatial/blob/main/Inference-Spatial--97.8.log
- Inference log (LIBERO-Goal): https://huggingface.co/VLA-Adapter/LIBERO-Goal/blob/main/Inference-Goal--97.2.log
- Inference log (LIBERO-Object): https://huggingface.co/VLA-Adapter/LIBERO-Object/blob/main/Inference-Object--99.2.log