🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨
Uploading new APEX quants is getting harder: beyond HF storage limits, I also no longer have access to a beefy GPU to quantize larger MoE models.
I host 25+ free APEX MoE quantizations as an independent contributor and this work is unpaid.
I'll keep doing my best to publish new models; your support makes it possible to continue.
Patreon (Monthly) | ☕ Buy Me a Coffee | ⭐ GitHub Sponsors
Every contribution goes directly toward Hugging Face storage fees and compute to keep APEX quants free for everyone.
LFM2-24B-A2B APEX GGUF
APEX (Adaptive Precision for EXpert Models) quantizations of LFM2-24B-A2B by LiquidAI.
Brought to you by the LocalAI team | APEX Project | Technical Report
Benchmark Results
Benchmarks coming soon. For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at mudler/Qwen3.5-35B-A3B-APEX-GGUF.
What is APEX?
APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, middle layers get more aggressive compression. I-variants use diverse imatrix calibration data (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
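As a rough illustration of the role-classification idea only (this is not the actual APEX implementation; the tensor-name patterns follow common llama.cpp GGUF conventions and are assumptions for this model):

```python
import re

# Hypothetical sketch of APEX-style tensor classification, not the real APEX script.
# Tensor names follow common llama.cpp GGUF conventions (e.g. "blk.12.ffn_up_exps.weight");
# the exact names used by this model are an assumption.

def classify_tensor(name: str) -> str:
    """Map a GGUF tensor name to a coarse role used to pick a quant type."""
    if re.search(r"ffn_(gate|up|down)_exps", name):
        return "routed_expert"   # per-expert FFN weights, the bulk of the parameters
    if re.search(r"ffn_(gate|up|down)_shexp", name):
        return "shared_expert"   # always-active shared expert, kept at higher precision
    if re.search(r"attn_(q|k|v|output)", name):
        return "attention"       # attention projections
    return "other"               # embeddings, norms, router, conv layers, etc.

print(classify_tensor("blk.12.ffn_up_exps.weight"))  # routed_expert
```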
See the APEX project for full details, technical report, and scripts.
Architecture
- Model: LFM2-24B-A2B (lfm2_moe) by LiquidAI
- Layers: 40 (30 convolutional + 10 full attention, hybrid)
- Experts: 64 routed (4 active per token) + 2 dense layers
- Total Parameters: 24B
- Active Parameters: ~2B per token
- APEX Config: 5+5 symmetric edge gradient across 40 layers (illustrated in the sketch after this list)
- Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
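A minimal sketch of what the 5+5 symmetric edge gradient over 40 layers means in practice; the tier names are placeholders, not the actual APEX quant assignments:

```python
# Illustrative sketch of a "5+5 symmetric edge gradient" over 40 layers.
# Tier names are placeholders; the real APEX quant types may differ.

N_LAYERS = 40
EDGE = 5  # 5 layers at each end keep higher precision

def precision_tier(layer: int) -> str:
    if layer < EDGE or layer >= N_LAYERS - EDGE:
        return "high"        # edge layers: first and last 5
    return "compressed"      # middle 30 layers: more aggressive quantization

edge_layers = [l for l in range(N_LAYERS) if precision_tier(l) == "high"]
print(edge_layers)  # [0, 1, 2, 3, 4, 35, 36, 37, 38, 39]
```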
Run with LocalAI
local-ai run mudler/LFM2-24B-A2B-APEX-GGUF@LFM2-24B-A2B-APEX-I-Balanced.gguf
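Once LocalAI is serving the model, you can query it through its OpenAI-compatible API. A minimal sketch using the openai Python client, assuming LocalAI's default address (http://localhost:8080/v1); the model name below is an assumption, so check what LocalAI actually registers:

```python
# Query the model served by LocalAI via its OpenAI-compatible API.
# Assumptions: LocalAI on its default port 8080, no API key required,
# and a registered model name matching the GGUF file name.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# List what LocalAI actually registered, in case the model name differs.
for m in client.models.list():
    print(m.id)

resp = client.chat.completions.create(
    model="LFM2-24B-A2B-APEX-I-Balanced.gguf",  # assumed name; verify against the list above
    messages=[{"role": "user", "content": "Summarize what an MoE model is in one sentence."}],
)
print(resp.choices[0].message.content)
```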
Credits
APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.