Tags: Text Generation · Transformers · Safetensors · English · qwen2 · code · codeqwen · chat · qwen · qwen-coder · fp8 · llm-compressor · compressed-tensors · vllm · conversational · text-generation-inference
How to use from SGLang

Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic" \
    --host 0.0.0.0 \
    --port 30000
```

Call the server using curl (OpenAI-compatible API):
```bash
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
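The same endpoint can also be called from Python. A minimal sketch using the openai client (an assumption; any OpenAI-compatible client works), pointed at the server started above:

```python
from openai import OpenAI

# SGLang exposes an OpenAI-compatible API; the key can be any placeholder.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:30000/v1")

response = client.chat.completions.create(
    model="RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```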
Model Overview
- Model Architecture: Qwen2ForCausalLM
- Input: Text
- Output: Text
- Model Optimizations:
- Weight quantization: FP8
- Activation quantization: FP8
- Release Date: 11/28/2024
- Version: 1.0
- Model Developers: Red Hat
Quantized version of Qwen/Qwen2.5-Coder-14B-Instruct.
Model Optimizations
This model was obtained by quantizing the weights and activations of Qwen/Qwen2.5-Coder-14B-Instruct to the FP8 data type. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized; with the FP8_dynamic scheme, weights use static per-channel scales while activations are quantized dynamically per token.
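As a rough check of the ~50% figure, a back-of-the-envelope sketch (the 14.7B parameter count is approximate, and the embeddings and unquantized lm_head add some overhead):

```python
# Approximate weight-memory footprint before and after FP8 quantization.
params = 14.7e9             # approximate parameter count of the 14B model
bf16_gb = params * 2 / 1e9  # 16-bit weights: 2 bytes per parameter
fp8_gb = params * 1 / 1e9   # 8-bit weights: 1 byte per parameter

print(f"BF16 weights: ~{bf16_gb:.1f} GB")  # ~29.4 GB
print(f"FP8 weights:  ~{fp8_gb:.1f} GB")   # ~14.7 GB
```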
Deployment
Use with vLLM
- Initialize the vLLM server:

```bash
vllm serve RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic
```

- Send requests to the server:
```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic"
messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]
outputs = client.chat.completions.create(
    model=model,
    messages=messages,
)
generated_text = outputs.choices[0].message.content
print(generated_text)
```
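vLLM can also run the model offline through its Python API, without standing up a server. A minimal sketch, assuming vllm is installed and a compatible GPU is available (llm.chat applies the model's chat template; exact sampling defaults may vary by vLLM version):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic")
sampling_params = SamplingParams(temperature=0.2, max_tokens=512)

messages = [{"role": "user", "content": "Write a quick sort algorithm."}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```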
Creation
This model was created with llm-compressor by running the code snippet below.
Model Creation Code

```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model
model_stub = "Qwen/Qwen2.5-Coder-14B-Instruct"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_stub)

# Configure the quantization algorithm and scheme:
# FP8_dynamic uses static per-channel FP8 weights and dynamic per-token
# FP8 activations; the lm_head is left unquantized.
recipe = QuantizationModifier(
    ignore=["lm_head"],
    targets="Linear",
    scheme="FP8_dynamic",
)

# Apply quantization (dynamic activation scales need no calibration data)
oneshot(
    model=model,
    recipe=recipe,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
Model tree for RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic
- Base model: Qwen/Qwen2.5-14B
- Finetuned: Qwen/Qwen2.5-Coder-14B
- Finetuned: Qwen/Qwen2.5-Coder-14B-Instruct
Install from pip and serve model

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic" \
  --host 0.0.0.0 \
  --port 30000
```

Once running, the server exposes the same OpenAI-compatible API and can be called with the curl command shown in the SGLang section above.