How to use with vLLM
Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "defog/llama-3-sqlcoder-8b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "defog/llama-3-sqlcoder-8b",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
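The same OpenAI-compatible endpoint can also be called from Python. Below is a minimal sketch using only the standard library, assuming the vLLM server above is running on localhost:8000; the helper names (`build_sql_request`, `query_server`) and the example question/DDL are illustrative, not part of the model's API. See the prompt section further down for the recommended prompt format.

```python
import json
import urllib.request

def build_sql_request(question: str, ddl: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload for SQLCoder."""
    content = (
        f"Generate a SQL query to answer this question: `{question}`\n\n"
        f"DDL statements:\n{ddl}"
    )
    return {
        "model": "defog/llama-3-sqlcoder-8b",
        "messages": [{"role": "user", "content": content}],
        "temperature": 0,  # greedy decoding, as recommended for this model
    }

def query_server(payload: dict,
                 url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """POST the payload to the vLLM server and return the generated text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
# payload = build_sql_request("How many users signed up last week?",
#                             "CREATE TABLE users (id int, created_at timestamp);")
# print(query_server(payload))
```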
Use Docker
docker model run hf.co/defog/llama-3-sqlcoder-8b

A capable language model for text-to-SQL generation for PostgreSQL, Redshift, and Snowflake that is on par with the most capable generalist frontier models.


Model Description

Developed by: Defog, Inc.
Model type: Text to SQL
License: CC-BY-SA-4.0
Finetuned from model: Meta-Llama-3-8B-Instruct

Demo Page

https://defog.ai/sqlcoder-demo/

Ideal prompt and inference parameters

Use greedy decoding: set temperature to 0 and disable sampling (for example, `do_sample=False` in Transformers).

Prompt

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Generate a SQL query to answer this question: `{user_question}`
{instructions}

DDL statements:
{create_table_statements}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

The following SQL query best answers the question `{user_question}`:
```sql
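For programmatic use, the template above can be filled in with a small helper. This is a minimal Python sketch using the placeholders shown above; the `format_prompt` name is illustrative, not part of the model's tooling.

```python
# SQLCoder prompt template, reproduced from the section above.
PROMPT_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Generate a SQL query to answer this question: `{user_question}`\n"
    "{instructions}\n\n"
    "DDL statements:\n"
    "{create_table_statements}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
    "The following SQL query best answers the question `{user_question}`:\n"
    "```sql\n"
)

def format_prompt(user_question: str, create_table_statements: str,
                  instructions: str = "") -> str:
    """Fill the SQLCoder prompt template with a question and schema DDL."""
    return PROMPT_TEMPLATE.format(
        user_question=user_question,
        instructions=instructions,
        create_table_statements=create_table_statements,
    )
```

Note that the prompt intentionally ends with an open ```` ```sql ```` fence, so the model continues directly with the SQL query itself.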

Evaluation

This model was evaluated on SQL-Eval, a PostgreSQL-based evaluation framework developed by Defog for testing and aligning model capabilities.

You can read more about the methodology behind SQL-Eval here.

Contact

Contact us on X at @defogdata, or by email at founders@defog.ai.
