Model Details
This model is a mixed int4 model with group_size 128 and symmetric quantization, generated from zai-org/GLM-4.5-Air by intel/auto-round via RTN (i.e., without algorithm tuning). Non-expert layers fall back to 8 bits. Please refer to the section Generate the model for more details, and please follow the license of the original model.
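As a quick way to confirm the scheme described above, you can inspect the quantization settings stored with the checkpoint. The snippet below is a minimal sketch, assuming the exported config exposes a quantization_config entry (exact field names depend on the auto-round export):

from transformers import AutoConfig

# Hypothetical inspection of the stored quantization settings; expect bits=4,
# group_size=128, sym=True, plus 8-bit overrides for non-expert layers.
cfg = AutoConfig.from_pretrained("Intel/GLM-4.5-Air-int4-mixed-AutoRound")
print(getattr(cfg, "quantization_config", None))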
How To Use
INT4 Inference
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "Intel/GLM-4.5-Air-int4-mixed-AutoRound"

messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)

model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = inputs.to(model.device)
inputs.pop("token_type_ids", None)  # drop token_type_ids if present; generate() does not accept it

generated_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:])
print(output_text)
"""
<think>We are writing a short introduction to large language models (LLMs).
Key points to cover:
1. What they are: AI models trained on vast amounts of text data.
2. How they work: Based on transformer architecture, using deep learning.
3. What they can do: Generate human-like text, answer questions, translate languages, summarize, etc.
4. Examples: Mention well-known models like GPT, BERT, etc.
5. Significance: They represent a major advancement in natural language processing and have broad applications.
Let's keep it concise and informative.</think>Large language models (LLMs) are advanced artificial intelligence systems trained on vast amounts of text data, enabling them to understand, generate, and refine human-like language. Built on deep learning architectures—typically transformers—these models leverage patterns from diverse sources (books, articles, websites) to perform tasks like answering questions, writing essays, translating languages, summarizing content, and even coding.
Key characteristics include:
- **Scale**: Trained on billions (or trillions) of parameters, allowing nuanced comprehension.
- **Versatility**: Adapted for applications from chatbots (e.g., ChatGPT) to research tools.
- **Emergent Abilities**: Skills not explicitly programmed, such as reasoning or creativity, emerge as the model grows.
Prominent examples include OpenAI’s GPT series, Google’s Gemini, and Meta’s LLaMA. While transformative, LLMs also raise ethical concerns about bias, misinformation, and energy consumption. They represent a leap in natural language processing, bridging human communication and machine intelligence.<|user|>
"""
Generate the model
import torch
from auto_round import AutoRound
from auto_round.utils import llm_load_model

model_name = "zai-org/GLM-4.5-Air"
model, tokenizer = llm_load_model(model_name, device="cpu")

# Expert layers are quantized to 4 bits; every other linear layer except lm_head
# falls back to 8 bits.
layer_config = {}
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Linear):
        if "expert" in n and "shared_experts" not in n:
            layer_config[n] = {"bits": 4}
        elif n != "lm_head":
            layer_config[n] = {"bits": 8}

# iters=0 with disable_opt_rtn=True selects plain RTN (no algorithm tuning).
ar = AutoRound(model, tokenizer, iters=0, layer_config=layer_config, disable_opt_rtn=True)
ar.quantize_and_save(format="auto_round", output_dir="tmp_autoround")
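After building layer_config, you can sanity-check the bit-width assignment. This optional snippet only relies on the layer_config dictionary from the recipe above and counts how many linear layers were mapped to each bit width (experts should be 4-bit, the rest 8-bit):

from collections import Counter

# Expected: a large count of 4-bit expert layers and a smaller count of 8-bit layers.
print(Counter(entry["bits"] for entry in layer_config.values()))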
Ethical Considerations and Limitations
The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
Cite
@article{cheng2025signroundv2,
  title={SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Guo, Heng and Shen, Haihao},
  journal={arXiv preprint arXiv:2512.04746},
  year={2025}
}