Kukedlc/ArchBeagle-7B-GGUF
License: apache-2.0
Model by Maxime Labonne, quantized with llama.cpp to Q4_K_M and Q5_K_M.
Format: GGUF
Model size: 7B params
Architecture: llama
Available quantizations:
Q4_K_M (4-bit): 4.37 GB
Q5_K_M (5-bit): 5.13 GB
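A minimal usage sketch with llama-cpp-python, assuming the package is installed (`pip install llama-cpp-python huggingface_hub`). The GGUF filename below is an assumption for illustration; check the repository's Files tab for the exact name of the Q4_K_M or Q5_K_M file.

```python
# Minimal sketch: download one GGUF quantization from the Hub and run it
# locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the 4-bit quantization (filename is hypothetical; verify in the repo)
model_path = hf_hub_download(
    repo_id="Kukedlc/ArchBeagle-7B-GGUF",
    filename="archbeagle-7b.Q4_K_M.gguf",
)

# Load the model; n_ctx sets the context window
llm = Llama(model_path=model_path, n_ctx=2048)

# Simple completion call
output = llm(
    "Q: What is a GGUF file? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

The Q5_K_M file trades a larger download (5.13 GB vs. 4.37 GB) for slightly lower quantization loss; swap the filename to use it instead.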