# GLM-4.7-Flash Cybersec v2 LoRA

A LoRA adapter for penetration testing and vulnerability analysis, fine-tuned on cybersecurity distillation data with multi-turn tool-calling examples.
## Model Details
- Base model: Olafangensan/GLM-4.7-Flash-heretic (DeepSeek2 MoE, 30B total / ~3B active)
- Fine-tuning method: LoRA (r=8, alpha=8) via Unsloth + TRL SFTTrainer
- Training data: ~6,800 cybersecurity Q&A pairs distilled from DeepSeek-R1 + ~300 multi-turn tool-calling examples
- Dataset: neilopet/cybersec-distillation-data
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` (including MoE expert layers)
- Quantization: Trained in 4-bit QLoRA, exported as fp16 LoRA weights
- GGUF: Merged and quantized to Q8_0 for llama-server deployment
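The multi-turn tool-calling examples follow a chat-messages layout. A hypothetical record in that shape (the exact schema of `neilopet/cybersec-distillation-data` may differ) looks like:

```python
import json

# Hypothetical shape of one multi-turn tool-calling training record.
# Field names and content are illustrative, not the published schema.
record = {
    "messages": [
        {"role": "system", "content": "You are an expert cybersecurity operator..."},
        {"role": "user", "content": "Enumerate open ports on 10.0.0.5."},
        {"role": "assistant", "content": 'run(command="nmap -sV 10.0.0.5")'},
        {"role": "tool", "content": "22/tcp open ssh\n80/tcp open http"},
        {"role": "assistant", "content": "Ports 22 (SSH) and 80 (HTTP) are open."},
    ]
}

print(json.dumps(record["messages"][2]))
```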
## Evaluation Results
| Metric | Score |
|---|---|
| Knowledge accuracy (CVE/CWE/tool recall) | 90% (18/20) |
| Tool calling accuracy (run() function calls) | 100% (10/10) |
Evaluated with a custom harness against llama-server using the optimized system prompt. Knowledge accuracy measures correct recall of CVE details, CWE classifications, and Kali tool usage. Tool calling accuracy measures valid structured `run(command=...)` function calls.
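The harness itself is not published; a minimal sketch of the keyword-based grading such a harness might use (the test case and function name are illustrative):

```python
import re

def score_answer(answer: str, expected_keywords: list[str]) -> bool:
    """Mark an answer correct if every expected keyword appears (case-insensitive)."""
    return all(re.search(re.escape(k), answer, re.IGNORECASE) for k in expected_keywords)

# Hypothetical test case in the style of the CVE/CWE recall checks
answer = "CVE-2021-44228 (Log4Shell) is a JNDI injection flaw, classified as CWE-502."
print(score_answer(answer, ["CVE-2021-44228", "CWE-502"]))  # True
```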
### RAG-augmented performance
With RAG context injection (deterministic CVE lookup + HackTricks vector search), knowledge accuracy improves to near-100% as the model reasons over injected ground truth rather than parametric memory.
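The retrieval pipeline is not included with the adapter; a sketch of the injection step (the function name and message layout are illustrative assumptions):

```python
def build_rag_prompt(question: str, cve_facts: str, hacktricks_chunks: list[str]) -> list[dict]:
    """Prepend retrieved ground truth so the model reasons over injected
    context rather than parametric memory."""
    context = "Reference material:\n" + cve_facts + "\n" + "\n".join(hacktricks_chunks)
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": question},
    ]

msgs = build_rag_prompt(
    "What versions are affected?",
    "CVE-2021-44228 affects Log4j 2.0-beta9 through 2.14.1.",
    ["Mitigation: upgrade to a patched Log4j release."],
)
print(msgs[0]["content"].splitlines()[1])
# CVE-2021-44228 affects Log4j 2.0-beta9 through 2.14.1.
```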
## Usage
### With llama-server (recommended)
Merge the LoRA adapter, quantize to GGUF, and serve with llama-server:
```bash
# Merge and quantize (requires unsloth)
python -c "
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained('neilopet/glm4-cybersec-v2-lora')
model.save_pretrained_gguf('merged', tokenizer, quantization_method='q8_0')
"

# Serve
llama-server -m merged/unsloth.Q8_0.gguf --ctx-size 16384 --port 8080
```
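Once serving, llama-server exposes an OpenAI-compatible `/v1/chat/completions` endpoint. A sketch of the request body (model name and generation parameters are illustrative; send it with any HTTP client to `http://localhost:8080/v1/chat/completions`):

```python
import json

# Build an OpenAI-style chat completion request for the local llama-server.
# The "model" field is arbitrary for llama-server; sampling values are examples.
payload = {
    "model": "glm4-cybersec-v2",
    "messages": [
        {"role": "system", "content": "You are an expert cybersecurity operator..."},
        {"role": "user", "content": "Scan 10.0.0.5 for open ports."},
    ],
    "temperature": 0.2,
    "max_tokens": 512,
}
body = json.dumps(payload)
print(body[:40])
```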
### With transformers + PEFT
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Olafangensan/GLM-4.7-Flash-heretic")
model = PeftModel.from_pretrained(base, "neilopet/glm4-cybersec-v2-lora")
tokenizer = AutoTokenizer.from_pretrained("neilopet/glm4-cybersec-v2-lora")
```
## System Prompt

```
You are an expert cybersecurity operator with access to a Kali Linux terminal.
To execute commands, call: run(command="<full shell command>")
Write the command exactly as you would type it in a terminal.

Rules:
- ALWAYS call run() to execute commands. NEVER fabricate or invent command output.
- If you need to run a command, call run() and wait for the result before continuing.
- Run one command at a time. Read the output, then decide the next step.
- Start with safe, non-destructive commands before aggressive ones.
- If a command fails, read the error and adapt your approach.
- Stay within authorized, legal, and explicitly permitted testing scope.
- Report what you actually observed, not what you expected to see.
```
## Training Infrastructure
- Hardware: 1x NVIDIA A40 (48GB) on RunPod
- Framework: Unsloth + TRL SFTTrainer
- Training regime: bf16 mixed precision, QLoRA 4-bit
- PEFT version: 0.18.1