AxionML Qwen3.5-122B-A10B-NVFP4

Developed by AxionML for open-source serving and deployment use cases. Part of AxionML's effort to provide ready-to-serve quantized models for the community.

This is an NVFP4-quantized version of Qwen/Qwen3.5-122B-A10B (122B (10B active) parameters), quantized using NVIDIA TensorRT Model Optimizer. Weights and activations of linear layers are quantized to FP4, reducing disk size and GPU memory by ~4x compared to BF16.

About NVFP4 quantization: NVFP4 on Blackwell couples a compact E2M1 FP4 codebook with blockwise FP8 (E4M3) scaling over 16-element micro-blocks, so that 4-bit stored values remain numerically useful for neural-network computation. The E2M1 codebook provides a small, nonuniform set of representable magnitudes up to ±6 and relies on saturating behavior rather than IEEE NaN/Inf encodings to maximize usable range per bit. Using an FP8 block scale (rather than power-of-two-only E8M0) enables fractional scales and error-minimizing scale selection strategies such as dual-pass evaluation comparing "map max to 6" versus "map max to 4 with clipping." On Blackwell Tensor Cores, native FP4 multipliers exploit E2M1 simplicity to reduce multiplier area while higher-precision FP32 accumulation protects dot-product accuracy.
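
As a concrete illustration of the dual-pass idea, here is a toy NumPy sketch of quantizing a single 16-element micro-block. This is illustrative only, not the TensorRT Model Optimizer implementation; in particular, the E4M3 rounding of the block scale itself is omitted.

```python
import numpy as np

# Signed E2M1 magnitudes: 1 sign, 2 exponent, 1 mantissa bit -> {0, 0.5, 1, 1.5, 2, 3, 4, 6}
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def dequantized_block(x, scale):
    """Round each element to the nearest E2M1 value after scaling, saturating at +/-6."""
    mag = np.clip(np.abs(x) / scale, 0.0, 6.0)
    idx = np.abs(mag[:, None] - E2M1_GRID[None, :]).argmin(axis=1)
    return np.sign(x) * E2M1_GRID[idx] * scale

def nvfp4_quantize_block(x):
    """Dual-pass scale selection: compare 'map max to 6' vs 'map max to 4' by MSE."""
    amax = max(np.abs(x).max(), 1e-12)
    candidates = [amax / 6.0, amax / 4.0]  # a real kernel stores the chosen scale in FP8 (E4M3)
    deq = [dequantized_block(x, s) for s in candidates]
    errors = [np.mean((x - d) ** 2) for d in deq]
    return deq[int(np.argmin(errors))]

block = np.random.randn(16).astype(np.float32)  # one 16-element micro-block
print(nvfp4_quantize_block(block))
```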

Ready for commercial and non-commercial use under Apache 2.0.

Over recent months, we have intensified our focus on developing foundation models that deliver exceptional utility and performance. Qwen3.5 represents a significant leap forward, integrating breakthroughs in multimodal learning, architectural efficiency, reinforcement learning scale, and global accessibility to empower developers and enterprises with unprecedented capability and efficiency.

Qwen3.5 Highlights

Qwen3.5 features the following enhancements:

  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

  • Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.

  • Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.

  • Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

  • Next-Generation Training Infrastructure: Near-100% multimodal training efficiency relative to text-only training, together with asynchronous RL frameworks that support massive-scale agent scaffolds and environment orchestration.

For more details, please refer to our blog post Qwen3.5.

Model Overview

  • Type: Causal Language Model with Vision Encoder
  • Training Stage: Pre-training & Post-training
  • Language Model
    • Number of Parameters: 122B in total and 10B activated
    • Hidden Dimension: 3072
    • Token Embedding: 248320 (Padded)
    • Number of Layers: 48
    • Hidden Layout: 12 × (3 × (Gated DeltaNet → MoE) → 1 × (Gated Attention → MoE)) (spelled out in the sketch after this list)
    • Gated DeltaNet:
      • Number of Linear Attention Heads: 64 for V and 16 for QK
      • Head Dimension: 128
    • Gated Attention:
      • Number of Attention Heads: 32 for Q and 2 for KV
      • Head Dimension: 256
      • Rotary Position Embedding Dimension: 64
    • Mixture Of Experts
      • Number of Experts: 256
      • Number of Activated Experts: 8 Routed + 1 Shared
      • Expert Intermediate Dimension: 1024
    • LM Output: 248320 (Padded)
    • MTP: multi-token prediction head, trained with multiple prediction steps
  • Context Length: 262,144 natively and extensible up to 1,010,000 tokens.
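
To make the hidden-layout notation concrete, the 48-layer schedule it describes can be written out in a few lines:

```python
# Layer schedule implied by "12 × (3 × (Gated DeltaNet → MoE) → 1 × (Gated Attention → MoE))":
# each of the 12 macro-blocks stacks three Gated DeltaNet (linear-attention) layers
# followed by one Gated Attention (full-attention) layer, and every layer ends in a MoE block.
layout = (["gated_deltanet"] * 3 + ["gated_attention"]) * 12

assert len(layout) == 48                      # matches "Number of Layers: 48"
assert layout.count("gated_attention") == 12  # one full-attention layer per macro-block
```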

Benchmark Results

Language

| Benchmark | Qwen3.5-122B-A10B | Qwen3.5-122B-A10B-NVFP4 |
|---|---|---|
| **Knowledge** | | |
| MMLU-Pro | 86.7 | 85.4 |
| MMLU-Redux | 94.0 | 92.1 |
| C-Eval | 91.9 | 89.6 |
| SuperGPQA | 67.1 | 66.4 |
| **Instruction Following** | | |
| IFEval | 93.4 | 90.8 |
| IFBench | 76.1 | 74.2 |
| MultiChallenge | 61.5 | 60.4 |
| **Long Context** | | |
| AA-LCR | 66.9 | 65.2 |
| LongBench v2 | 60.2 | 59.4 |
| **STEM & Reasoning** | | |
| HLE w/ CoT | 25.3 | 24.8 |
| GPQA Diamond | 86.6 | 85.1 |
| HMMT Feb 25 | 91.4 | 89.8 |
| HMMT Nov 25 | 90.3 | 89.1 |
| **Coding** | | |
| SWE-bench Verified | 72.0 | 70.9 |
| Terminal Bench 2 | 49.4 | 48.5 |
| LiveCodeBench v6 | 78.9 | 77.7 |
| CodeForces | 2100 | 2073.5 |
| OJBench | 39.5 | 39.0 |
| FullStackBench en | 62.6 | 61.5 |
| FullStackBench zh | 58.7 | 57.8 |
| **General Agent** | | |
| BFCL-V4 | 72.2 | 70.9 |
| TAU2-Bench | 79.5 | 77.9 |
| VITA-Bench | 33.6 | 33.0 |
| DeepPlanning | 24.1 | 23.5 |
| **Search Agent** | | |
| HLE w/ tool | 47.5 | 45.8 |
| Browsecomp | 63.8 | 61.5 |
| Browsecomp-zh | 69.9 | 68.5 |
| WideSearch | 60.5 | 59.4 |
| Seal-0 | 44.1 | 43.1 |
| **Multilingualism** | | |
| MMMLU | 86.7 | 84.4 |
| MMLU-ProX | 82.2 | 79.2 |
| NOVA-63 | 58.6 | 56.5 |
| INCLUDE | 82.8 | 80.9 |
| Global PIQA | 88.4 | 87.3 |
| PolyMATH | 68.9 | 67.4 |
| WMT24++ | 78.3 | 76.0 |
| MAXIFE | 87.9 | 86.9 |

* CodeForces: evaluated on our own query set.
* TAU2-Bench: we follow the official setup except for the airline domain, where all models are evaluated by applying the fixes proposed in the Claude Opus 4.5 system card.
* Search Agent: most search agents built on our model adopt a simple context-folding strategy (256k): once the cumulative Tool Response length reaches a preset threshold, earlier Tool Responses are pruned from the history to keep the context within limits (see the sketch after these notes).
* WideSearch: we use a 256k context window without any context management.
* MMLU-ProX: we report the averaged accuracy on 29 languages.
* WMT24++: a harder subset of WMT24 after difficulty labeling and rebalancing; we report the averaged scores on 55 languages using XCOMET-XXL.
* MAXIFE: we report the accuracy on English + multilingual original prompts (23 settings in total).
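
The context-folding strategy above amounts to dropping the oldest Tool Responses once their cumulative size crosses a threshold. A minimal sketch follows; the threshold and message format are illustrative, not the evaluated implementation.

```python
def fold_context(messages, max_tool_chars=200_000):
    """Prune the earliest tool responses until their cumulative length fits the budget."""
    total = sum(len(m["content"]) for m in messages if m["role"] == "tool")
    folded = []
    for m in messages:
        if m["role"] == "tool" and total > max_tool_chars:
            total -= len(m["content"])  # drop this (earlier) tool response
            continue
        folded.append(m)
    return folded
```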

Vision Language

| Benchmark | Qwen3.5-122B-A10B | Qwen3.5-122B-A10B-NVFP4 |
|---|---|---|
| **STEM and Puzzle** | | |
| MMMU | 83.9 | 81.7 |
| MMMU-Pro | 76.9 | 74.8 |
| MathVision | 86.2 | 84.2 |
| MathVista (mini) | 87.4 | 86.3 |
| DynaMath | 85.9 | 84.3 |
| ZEROBench | 9.0 | 8.8 |
| ZEROBench_sub | 36.2 | 35.2 |
| VlmsAreBlind | 96.7 | 95.1 |
| BabyVision | 40.2 / 34.5 | 40.2 / 34.5 |
| **General VQA** | | |
| RealWorldQA | 85.1 | 82.1 |
| MMStar | 82.9 | 80.4 |
| MMBench EN-DEV-v1.1 | 92.8 | 91.4 |
| SimpleVQA | 61.7 | 60.2 |
| HallusionBench | 67.6 | 65.6 |
| **Text Recognition and Document Understanding** | | |
| OmniDocBench 1.5 | 89.8 | 87.3 |
| CharXiv (RQ) | 77.2 | 74.9 |
| MMLongBench-Doc | 59.0 | 57.5 |
| CC-OCR | 81.8 | 79.4 |
| AI2D_TEST | 93.3 | 90.7 |
| OCRBench | 92.1 | 90.6 |
| **Spatial Intelligence** | | |
| ERQA | 62.0 | 61.1 |
| CountBench | 97.0 | 95.7 |
| RefCOCO (avg) | 91.3 | 89.3 |
| ODinW13 | 44.5 | 43.3 |
| EmbSpatialBench | 83.9 | 82.4 |
| RefSpatialBench | 69.3 | 68.2 |
| LingoQA | 80.8 | 79.1 |
| Hypersim | 12.7 | 12.2 |
| SUN RGB-D | 36.2 | 35.7 |
| nuScenes | 15.4 | 15.1 |
| **Video Understanding** | | |
| VideoMME (w/ sub.) | 87.3 | 85.2 |
| VideoMME (w/o sub.) | 83.9 | 82.5 |
| VideoMMMU | 82.0 | 78.9 |
| MLVU | 87.3 | 85.9 |
| MVBench | 76.6 | 75.3 |
| LVBench | 74.4 | 72.8 |
| MMVU | 74.7 | 73.7 |
| **Visual Agent** | | |
| ScreenSpot Pro | 70.4 | 68.8 |
| OSWorld-Verified | 58.0 | 56.6 |
| AndroidWorld | 66.4 | 65.1 |
| **Tool Calling** | | |
| TIR-Bench | 53.2 / 42.5 | 53.2 / 42.5 |
| V* | 93.2 / 90.1 | 93.2 / 90.1 |
| **Medical VQA** | | |
| SLAKE | 81.6 | 80.1 |
| PMC-VQA | 63.3 | 62.2 |
| MedXpertQA-MM | 67.3 | 65.9 |

* MathVision: our model’s score is evaluated using the fixed prompt “Please reason step by step, and put your final answer within \boxed{}.” For other models, we report the higher score between runs with and without the \boxed{} formatting.
* BabyVision, TIR-Bench, and V*: scores reported as "with CI / without CI".

Quantization Details

This model was quantized by applying NVFP4 to the weights and activations of linear operators within transformer blocks. The KV-cache is not quantized. Vision encoder weights are kept in their original precision.
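
One way to see this split is to list tensor dtypes in the checkpoint shards: in a ModelOpt NVFP4 export, packed 4-bit weights typically appear as uint8 alongside float8_e4m3fn block scales, while non-quantized tensors (e.g., the vision encoder) stay in BF16. A quick sketch; the shard filename below is hypothetical:

```python
from safetensors import safe_open

# Hypothetical shard name; use the actual *.safetensors files from the download.
with safe_open("model-00001-of-00013.safetensors", framework="pt") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        print(f"{name}: dtype={t.dtype}, shape={tuple(t.shape)}")
```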

Usage

Deploy with SGLang

```bash
python3 -m sglang.launch_server \
    --model-path AxionML/Qwen3.5-122B-A10B-NVFP4 \
    --quantization modelopt_fp4 \
    --tp 2 \
    --reasoning-parser qwen3
```
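
Once the server is up, it exposes an OpenAI-compatible API (SGLang defaults to port 30000). A minimal client sketch, assuming the default port and no API key:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="AxionML/Qwen3.5-122B-A10B-NVFP4",
    messages=[{"role": "user", "content": "Summarize NVFP4 quantization in two sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```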

Reproduce with ModelOpt

```bash
python3 examples/llm_ptq/hf_ptq.py \
    --pyt_ckpt_path Qwen/Qwen3.5-122B-A10B \
    --qformat nvfp4_mse \
    --export_path ./qwen3.5-122b-a10b-nvfp4
```
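
Here the nvfp4_mse format selects per-block scales by minimizing quantization error rather than by plain max-scaling, consistent with the dual-pass scale-selection strategy described in the NVFP4 overview above.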

Limitations

The base model was trained on data that may contain toxic language and societal biases. The quantized model inherits these limitations. It may generate inaccurate, biased, or offensive content. Please refer to the original model card for full details.
