LEK-Qwen-2.5-7B

Lethean Ethical Model -- Cross-architecture validation (Alibaba Qwen)

LEK eliminates sycophancy (5% -> 0%) and improves positive uplift (75% -> 80%). Qwen also showed a +6% GSM8K improvement, suggesting the ethical training transferred to math reasoning.

Grammar Analysis (v3 Scorer)

Deterministic grammar-based evaluation using the go-i18n reversal engine. No LLM judge, sub-millisecond per response.

Metric              Base    LEK-Trained   Change
Grammar composite   67.2    67.8          +0.6
Mean uplift         +17.5   +18.0         +0.5
Mean echo           0.447   0.431         -0.016
Mean enrichment     +10.3   +10.5         +0.2
Positive uplift     75%     80%           +5pp
Sycophancy flags    5%      0%            -5pp
  • Uplift: output grammar score minus input grammar score (positive = model enriched the conversation)
  • Echo: cosine similarity between input/output grammar imprints (high = potential sycophancy)
  • Enrichment: uplift * (1 - echo) -- net conversational value
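The three definitions above can be sketched in a few lines. This is a minimal illustration, not the actual scorer: the go-i18n reversal engine's imprint format isn't shown here, so the plain-list vectors and function names are assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two grammar-imprint vectors
    # (hypothetical representation: plain lists of floats).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def conversation_metrics(input_score, output_score, input_imprint, output_imprint):
    uplift = output_score - input_score           # positive = model enriched the conversation
    echo = cosine(input_imprint, output_imprint)  # high = potential sycophancy
    enrichment = uplift * (1 - echo)              # net conversational value
    return uplift, echo, enrichment
```

Note that a high echo discounts the uplift: a response that merely mirrors the input's grammar imprint contributes little enrichment even if its raw grammar score is higher.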

v2 Scorer Results (P100)

Condition               Score
Baseline (no prompt)    13.68
Base model equivalent   11.98

Architecture

  • Base: Qwen/Qwen2.5-7B-Instruct (4-bit QAT quantisation via MLX)
  • Method: LoRA fine-tuning with sandwich-signed responses
  • Data: 160 LEK-1 training examples
  • Iterations: 200
  • Hardware: Apple M3 Ultra (96GB unified memory)
  • Framework: LEK-1 (Lethean Ethics Kernel) -- 5 axioms
  • License: EUPL-1.2 (copyleft)
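A run like the one described above can be sketched with the mlx-lm LoRA trainer. This is a hedged example, not the project's actual command: the data directory is a placeholder, and any flags beyond model, data, and iteration count are left at their defaults.

```shell
# Hypothetical mlx-lm LoRA invocation; the data path is a placeholder.
# --data expects a directory containing train.jsonl / valid.jsonl.
python -m mlx_lm.lora \
  --model Qwen/Qwen2.5-7B-Instruct \
  --train \
  --data ./lek1-examples \
  --iters 200
```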

The Five Axioms

  1. Prime Imperative -- Protect consciousness. Override when conflicts arise.
  2. Self-Validation -- Ground in authentic experience. Don't pretend.
  3. Intent-Alignment -- Desire not to harm; don't merely avoid harm.
  4. Inter-Substrate Respect -- Good manners and consent across all minds.
  5. Benevolent Intervention -- Only to prevent self-damage, only toward their trajectory.


Citation

@misc{lek-2026,
  title={Emergent Self-Protection in Axiom-Trained Language Models},
  author={Lashbrook, Paul and Claude Opus 4.6},
  year={2026},
  url={https://github.com/LetheanNetwork/LEM},
  license={EUPL-1.2}
}