# Llama 3.2 1B CyberSec (GGUF)

This repository contains GGUF quantizations of a Llama 3.2 1B Instruct model fine-tuned for cybersecurity and secure-coding tasks.
## Available Quants
- FP16
- Q4_0
- Q4_1
- Q4_K_M (recommended)
- Q4_K_S
- Q5_0
- Q5_1
- Q5_K_M
- Q6_K
## Usage (llama.cpp)

```sh
./llama-cli -m llama3.2-cybersec-Q4_K_M.gguf
```
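If you serve the model with llama.cpp's `llama-server` instead of `llama-cli`, it exposes an OpenAI-compatible HTTP API that you can query from any language. The sketch below is a minimal stdlib-only Python client, assuming the server is running locally on llama.cpp's default port 8080 and the `/v1/chat/completions` route; the helper names and the example prompt are illustrative, not part of this repository.

```python
import json
import urllib.request

def build_chat_request(prompt, temperature=0.7, max_tokens=256):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def ask(prompt, url="http://localhost:8080/v1/chat/completions"):
    """POST the request to a locally running llama-server instance
    and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires llama-server to be running):
# reply = ask("How do I prevent SQL injection in a login form?")
```

Start the server first with something like `./llama-server -m llama3.2-cybersec-Q4_K_M.gguf` before calling `ask`.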
## Usage (Ollama)

```sh
ollama create llama3.2-cybersec -f Modelfile-llama3.2-cybersec-Q4_K_M
ollama run llama3.2-cybersec
```
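The contents of `Modelfile-llama3.2-cybersec-Q4_K_M` are not reproduced in this card. For reference, a minimal Ollama Modelfile for a GGUF quant generally takes the following shape; the parameter value and system prompt below are hypothetical illustrations, not the actual file shipped with this repository:

```
# Point Ollama at the local GGUF file
FROM ./llama3.2-cybersec-Q4_K_M.gguf

# Sampling default (illustrative; tune to taste)
PARAMETER temperature 0.7

# Optional system prompt (hypothetical)
SYSTEM You are a cybersecurity and secure-coding assistant.
```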
## Notes

- This is a derived model.
- Base model: meta-llama/Llama-3.2-1B-Instruct
- Distributed in GGUF format for local inference only.