Instructions for using rleo/function-gemma-finetuned-tool-call with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use rleo/function-gemma-finetuned-tool-call with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rleo/function-gemma-finetuned-tool-call",
    filename="function-gemma-finetuned-tool-call.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
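Since this model is fine-tuned for tool calling, you will usually also pass a tool schema. A minimal sketch using llama-cpp-python's OpenAI-style tools parameter (the get_weather tool is a hypothetical example; actual tool-call parsing depends on the chat template llama-cpp-python picks up from the GGUF):
llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
)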
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use rleo/function-gemma-finetuned-tool-call with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf rleo/function-gemma-finetuned-tool-call
# Run inference directly in the terminal:
llama-cli -hf rleo/function-gemma-finetuned-tool-call
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf rleo/function-gemma-finetuned-tool-call
# Run inference directly in the terminal:
llama-cli -hf rleo/function-gemma-finetuned-tool-call
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf rleo/function-gemma-finetuned-tool-call
# Run inference directly in the terminal:
./llama-cli -hf rleo/function-gemma-finetuned-tool-call
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf rleo/function-gemma-finetuned-tool-call
# Run inference directly in the terminal:
./build/bin/llama-cli -hf rleo/function-gemma-finetuned-tool-call
Use Docker
docker model run hf.co/rleo/function-gemma-finetuned-tool-call
- LM Studio
- Jan
- vLLM
How to use rleo/function-gemma-finetuned-tool-call with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "rleo/function-gemma-finetuned-tool-call"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "rleo/function-gemma-finetuned-tool-call",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/rleo/function-gemma-finetuned-tool-call
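As an alternative to curl, the vLLM server can be called from Python with any OpenAI-compatible client. A minimal sketch using the openai package (assuming the vllm serve command above is running on localhost:8000; the API key is a placeholder, since vLLM does not require one by default):
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # placeholder key
response = client.chat.completions.create(
    model="rleo/function-gemma-finetuned-tool-call",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)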
- Ollama
How to use rleo/function-gemma-finetuned-tool-call with Ollama:
ollama run hf.co/rleo/function-gemma-finetuned-tool-call
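The model can also be called programmatically once pulled. A minimal sketch using the ollama Python package (assuming the Ollama daemon is running locally):
# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/rleo/function-gemma-finetuned-tool-call",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])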
- Unsloth Studio
How to use rleo/function-gemma-finetuned-tool-call with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for rleo/function-gemma-finetuned-tool-call to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for rleo/function-gemma-finetuned-tool-call to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for rleo/function-gemma-finetuned-tool-call to start chatting
- Pi
How to use rleo/function-gemma-finetuned-tool-call with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf rleo/function-gemma-finetuned-tool-call
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
    "providers": {
        "llama-cpp": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                { "id": "rleo/function-gemma-finetuned-tool-call" }
            ]
        }
    }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use rleo/function-gemma-finetuned-tool-call with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf rleo/function-gemma-finetuned-tool-call
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default rleo/function-gemma-finetuned-tool-call
Run Hermes
hermes
- Docker Model Runner
How to use rleo/function-gemma-finetuned-tool-call with Docker Model Runner:
docker model run hf.co/rleo/function-gemma-finetuned-tool-call
- Lemonade
How to use rleo/function-gemma-finetuned-tool-call with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull rleo/function-gemma-finetuned-tool-call
Run and chat with the model
lemonade run user.function-gemma-finetuned-tool-call-{{QUANT_TAG}}
List all available models
lemonade list
function-gemma-finetuned-tool-call
Fine-tuned Function-Gemma 270M model for bilingual (English/French) tool-calling.
Files
function-gemma-finetuned-tool-call.gguf (F16 merged GGUF)
Base Model
unsloth/functiongemma-270m-it
Training Summary
- Method: SFT + LoRA, then merged into full weights
- Dataset: custom bilingual EN/FR tool-calling set (dataset_80tools_en_fr.json)
- Target behavior: structured function/tool calls with argument extraction, and no-tool abstention when appropriate
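A minimal sketch of the merge step described above, assuming a standard transformers + peft workflow (the adapter path is hypothetical; the actual training script is not part of this card):
# pip install transformers peft
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model and attach the trained LoRA adapter
base = AutoModelForCausalLM.from_pretrained("unsloth/functiongemma-270m-it")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical adapter path

# Fold the LoRA deltas into the base weights and save the merged model
merged = model.merge_and_unload()
merged.save_pretrained("function-gemma-finetuned-tool-call")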
Local Evaluation (checkpoint benchmark)
From outputs/eval_checkpoint_report.json:
- Total cases: 16
- Pass rate: 0.8125
- Decision accuracy: 0.8125
- Tool name accuracy: 0.8125
- Argument presence accuracy: 1.0
- Tool-call recall: 1.0
- No-tool precision: 0.5
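For reference, a minimal sketch of how metrics like decision accuracy and no-tool precision can be computed from labeled cases (illustrative only; the actual evaluation script behind the numbers above is not included here):
# Hypothetical labeled cases: which tool (if any) was expected vs. predicted.
cases = [
    {"expected_tool": "get_weather", "predicted_tool": "get_weather"},
    {"expected_tool": None, "predicted_tool": None},
    {"expected_tool": "get_weather", "predicted_tool": None},  # wrong abstention
]

# Decision accuracy: did the model correctly decide *whether* to call a tool?
decision_correct = sum(
    (c["expected_tool"] is None) == (c["predicted_tool"] is None) for c in cases
)
print("decision accuracy:", decision_correct / len(cases))

# No-tool precision: of the cases where the model abstained, how many truly needed no tool?
abstained = [c for c in cases if c["predicted_tool"] is None]
correct = [c for c in abstained if c["expected_tool"] is None]
print("no-tool precision:", len(correct) / len(abstained))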
Usage (llama.cpp)
llama.cpp/build/bin/llama-cli \
--model function-gemma-finetuned-tool-call.gguf \
--ctx-size 32768 \
--n-gpu-layers 99 \
--seed 3407 \
--top-k 64 \
--top-p 0.95 \
--temp 1.0 \
--jinja
For a one-shot test:
llama.cpp/build/bin/llama-cli \
--model function-gemma-finetuned-tool-call.gguf \
--ctx-size 32768 \
--n-gpu-layers 99 \
--seed 3407 \
--top-k 64 \
--top-p 0.95 \
--temp 1.0 \
--jinja \
--single-turn \
--simple-io \
--prompt "What is the weather in Paris?"
Prompt / Output Format
This model was fine-tuned for Function-Gemma style tool tags (e.g. <start_function_call>...).
When used with --jinja, llama.cpp applies the chat template stored in GGUF metadata.
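When served with llama-server and --jinja, the same template enables OpenAI-style tool calls over the /v1/chat/completions endpoint. A minimal sketch, assuming the server runs on the default port 8080 (the get_weather schema is a hypothetical example):
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # placeholder key
response = client.chat.completions.create(
    model="rleo/function-gemma-finetuned-tool-call",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)
print(response.choices[0].message.tool_calls)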
Limitations
- Small model (270M): can still over-call tools in ambiguous no-tool prompts.
- Best results require strong tool schema prompts and clear user intent.
Intended Use
- Lightweight local assistant prototypes
- Tool-routing and structured argument extraction tasks
- EN/FR bilingual demos and experimentation