LFM2-2.6B - Quantized GGUF Model

This is a quantized GGUF model (Q8_0) compatible with Ollama.

Model Details

  • Base Model: LiquidAI/LFM2-2.6B
  • Architecture: lfm2
  • Quantization: Q8_0 (8-bit)
  • Runtime: Ollama

Usage with Ollama

You can pull and run this model directly from the Hugging Face Hub with Ollama (recent Ollama versions support hf.co/ model references):

ollama pull hf.co/Sadiah/ollama-q8_0-LFM2-2.6B:Q8_0

Then run it:

ollama run hf.co/Sadiah/ollama-q8_0-LFM2-2.6B:Q8_0 "Write your prompt here"
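
Once pulled, the model can also be queried through Ollama's local REST API. A minimal sketch, assuming the Ollama server is running on its default port (11434); the prompt and the stream setting are illustrative:

# Query the local Ollama server (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/Sadiah/ollama-q8_0-LFM2-2.6B:Q8_0",
  "prompt": "Write your prompt here",
  "stream": false
}'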

Features

  • 8-bit quantization (Q8_0): smaller memory footprint than FP16 with minimal quality loss
  • Compatible with Ollama's inference engine, whether pulled directly or registered from a local GGUF file (see the Modelfile sketch below)
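
If you would rather download the GGUF file yourself, you can register it with Ollama through a Modelfile. A minimal sketch, assuming the file has been downloaded from this repo; the local filename and the model name lfm2-q8 are illustrative:

# Create a Modelfile pointing at the downloaded GGUF file
cat > Modelfile <<'EOF'
FROM ./LFM2-2.6B-Q8_0.gguf
EOF

# Register the model under a local name, then run it
ollama create lfm2-q8 -f Modelfile
ollama run lfm2-q8 "Write your prompt here"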

License

Please refer to the original model card (LiquidAI/LFM2-2.6B) for licensing information.
