Model Details

This is a mixed-precision INT4 model (group_size 128) of Qwen/Qwen3.6-35B-A3B, generated by intel/auto-round. Please follow the license of the original model.
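As a quick sanity check, the quantization settings can be read from the checkpoint's config (a minimal sketch; the exact fields printed depend on your transformers/auto-round versions):

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Intel/Qwen3.6-35B-A3B-int4-AutoRound")
# Most layers should report bits=4 with group_size=128; some layers are kept in higher precision.
print(cfg.quantization_config)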

Some users have reported an infinite loop issue in our INT4 version. We have added fallbacks for certain layers in this release, but it is still unclear whether the issue has been fully resolved. Please use it with caution.

The main branch was generated with auto-round-best, while revision 54e7cd36d9f7a358b4b42740c3aef755638e5ea6 was generated with auto-round. Typically, auto-round-best gives better results.
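To load the auto-round revision instead of main, pass the commit hash via revision (a sketch reusing the model class from the Transformers example below; adjust to your own loading code):

from transformers import Qwen3_5MoeForConditionalGeneration

model = Qwen3_5MoeForConditionalGeneration.from_pretrained(
    "Intel/Qwen3.6-35B-A3B-int4-AutoRound",
    revision="54e7cd36d9f7a358b4b42740c3aef755638e5ea6",  # auto-round recipe
    dtype="auto",
    device_map="auto",
)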

vLLM Inference Example

Verified on vLLM 0.19.

vllm serve Intel/Qwen3.6-35B-A3B-int4-AutoRound \
  --port 8000 \
  --tensor-parallel-size 1 \
  --max-model-len 2048 \
  --reasoning-parser qwen3 \
  --served-model-name qwen \
  --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d ' {
    "model": "qwen",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Summarize Qwen 3.6 in one sentence."}
    ],
    "temperature": 1,
    "max_tokens": 512
  } '
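
The same request can be sent through the OpenAI-compatible Python client (a sketch; assumes the openai package is installed and the server above is running):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="qwen",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Qwen 3.6 in one sentence."},
    ],
    temperature=1,
    max_tokens=512,
)
print(response.choices[0].message.content)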

Transformers Inference

Requires gptqmodel < 7.0.

from transformers import AutoProcessor, Qwen3_5MoeForConditionalGeneration
model_name = "Intel/Qwen3.6-35B-A3B-int4-AutoRound"

model = Qwen3_5MoeForConditionalGeneration.from_pretrained(
    model_name, dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_name)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image in short."},
        ],
    }
]


inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
)
inputs = inputs.to(model.device)


generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])

"""
The user wants a short description of the provided image.

1.  **Identify the main subjects:** A young woman and a dog (looks like a Golden Retriever or Yellow Labrador).
2.  **Identify the action:** They are interacting, specifically shaking hands (paw shake). They are both smiling.
3.  **Identify the setting:** A sandy beach near the ocean.
4.  **Identify the lighting/mood:** It looks like sunset or sunrise (golden hour) due to the warm light and soft shadows. The mood is happy, peaceful, and affectionate.
5
"""

Generate the Model

import gc

from transformers import Qwen3_5MoeForConditionalGeneration
import torch
model_name = "Qwen/Qwen3.6-35B-A3B"

model = Qwen3_5MoeForConditionalGeneration.from_pretrained(model_name)
layer_config = {}
mixed_bits = 16
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Linear) and "language_model" in n:
        if "linear_att" in n:  # must be set to 4 bits for vLLM compatibility
            layer_config[n] = {"bits": 4}
            continue
        if "expert" not in n:  # non-expert layers are kept at 16 bits
            layer_config[n] = {"bits": mixed_bits}
        elif "shared_expert" in n:  # shared experts are also kept at 16 bits
            layer_config[n] = {"bits": mixed_bits}
from auto_round import AutoRound

del model
gc.collect()

ar = AutoRound(
    model=model_name,
    layer_config=layer_config,
    nsamples=512,
    enable_torch_compile=True,
    low_gpu_mem_usage=True,
)
ar.quantize_and_save("./qwen3-3.6-quantized")
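
A minimal smoke test of the saved checkpoint (a sketch; the output path matches the call above and the prompt is illustrative):

from transformers import AutoTokenizer, Qwen3_5MoeForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("./qwen3-3.6-quantized")
quantized = Qwen3_5MoeForConditionalGeneration.from_pretrained(
    "./qwen3-3.6-quantized", dtype="auto", device_map="auto"
)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Say hello in one sentence."}],
    add_generation_prompt=True,
    tokenize=False,
)
inputs = tokenizer(prompt, return_tensors="pt").to(quantized.device)
outputs = quantized.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))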

Ethical Considerations and Limitations

The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
