Active filters: llama-2
Each entry: model ID · task · parameter size (where shown) · downloads · likes. Last-updated dates and some model names were not captured; entries with a missing name are marked "(name not captured)".

meta-llama/Llama-2-7b-chat-hf · Text Generation · 7B · 437k downloads · 4.7k likes
(name not captured) · Text Generation · 7B · 459k downloads · 2.26k likes
meta-llama/Llama-2-13b-hf · Text Generation · 13B · 29.3k downloads · 621 likes
TheBloke/Llama-2-7B-Chat-GGML · Text Generation · 350 downloads · 872 likes
NousResearch/Llama-2-7b-hf · Text Generation · 7B · 62.8k downloads · 171 likes
NousResearch/Nous-Hermes-Llama2-13b · Text Generation · 13B · 992 downloads · 320 likes
codellama/CodeLlama-34b-Instruct-hf · Text Generation · 34B · 19.5k downloads · 303 likes
TheBloke/CodeLlama-7B-Instruct-GGUF · Text Generation · 7B · 8.36k downloads · 147 likes
TheBloke/CodeLlama-34B-Python-fp16 · Text Generation · 34B · 1.86k downloads · 14 likes
(name not captured) · Text Generation · 7B · 3.7k downloads · 242 likes
DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix · Text Generation · 20B · 424 downloads · 13 likes
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters · 158
(name not captured) · 12 · 1
(name not captured) · Text Generation · 321 downloads · 4.45k likes
meta-llama/Llama-2-7b-chat · Text Generation · 157 downloads · 613 likes
(name not captured) · Text Generation · 61 downloads · 352 likes
meta-llama/Llama-2-13b-chat · Text Generation · 24 downloads · 295 likes
(name not captured) · Text Generation · 24 downloads · 538 likes
meta-llama/Llama-2-70b-hf · Text Generation · 69B · 9.03k downloads · 856 likes
meta-llama/Llama-2-13b-chat-hf · Text Generation · 13B · 231k downloads · 1.11k likes
meta-llama/Llama-2-70b-chat · Text Generation · 5 downloads · 398 likes
meta-llama/Llama-2-70b-chat-hf · Text Generation · 69B · 18.8k downloads · 2.2k likes
(name not captured) · Text Generation · 94 downloads · 219 likes
(name not captured) · Text Generation · 7B · 16.2k downloads · 81 likes
TheBloke/Llama-2-13B-GPTQ · Text Generation · 13B · 538 downloads · 120 likes
TheBloke/Llama-2-13B-GGML · Text Generation · 64 downloads · 175 likes
TheBloke/Llama-2-7B-Chat-GPTQ · Text Generation · 7B · 13.2k downloads · 267 likes
TheBloke/Llama-2-13B-chat-GGML · Text Generation · 111 downloads · 696 likes
anonymous4chan/llama-2-7b · Text Generation · 7B · 12
TheBloke/Llama-2-13B-chat-GPTQ · Text Generation · 13B · 474 downloads · 363 likes
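A listing like the one above can also be fetched programmatically. Below is a minimal sketch using the `huggingface_hub` client library's `HfApi.list_models` (its `search`, `sort`, and `limit` parameters are real); the `format_entry` helper is a hypothetical function added here just to mirror this page's bullet-separated row layout.

```python
def format_entry(model_id, task, downloads, likes):
    """Render one listing row in the page's "id · task · downloads · likes" style."""
    parts = [model_id, task or "n/a", f"{downloads} downloads", f"{likes} likes"]
    return " · ".join(parts)


def print_listing(query="llama-2", limit=10):
    """Fetch the top models matching `query` by downloads and print them.

    Requires `huggingface_hub` installed and network access, so the import
    and API call are kept inside this function rather than at module level.
    """
    from huggingface_hub import HfApi

    api = HfApi()
    for m in api.list_models(search=query, sort="downloads", limit=limit):
        # ModelInfo exposes id, pipeline_tag, downloads, and likes fields.
        print(format_entry(m.id, m.pipeline_tag, m.downloads, m.likes))
```

Calling `print_listing()` would print rows in the same shape as the entries above, e.g. `meta-llama/Llama-2-7b-chat-hf · text-generation · 437000 downloads · 4700 likes` (counts will differ over time).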