Model Description

This model was fine-tuned from Vistral-7B-Chat for function calling.

Usage

You can find the GGUF version of the model here: https://huggingface.co/hiieu/Vistral-7B-Chat-function-calling-gguf

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('hiieu/Vistral-7B-Chat-function-calling')
model = AutoModelForCausalLM.from_pretrained(
    'hiieu/Vistral-7B-Chat-function-calling',
    torch_dtype=torch.bfloat16, # change to torch.float16 if you're using V100
    device_map="auto",
    use_cache=True,
)

functions_metadata = [
    {
      "type": "function",
      "function": {
        "name": "get_temperature",
        "description": "get temperature of a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "name"
            }
          },
          "required": [
            "city"
          ]
        }
      }
    }
]
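The `functions_metadata` list above follows the common JSON-schema-style tool format (`type`/`properties`/`required`). Before executing a call the model produces, you may want to check its arguments against this schema. Below is a minimal, hedged sketch; the `validate_args` helper is illustrative and not part of the model's API:

```python
def validate_args(func_schema: dict, args: dict) -> list:
    """Return a list of problems with `args` against one entry of
    functions_metadata (an empty list means the arguments are valid).
    Illustrative helper, not part of the model's API."""
    params = func_schema["function"]["parameters"]
    problems = []
    # Every required parameter must be present.
    for name in params.get("required", []):
        if name not in args:
            problems.append(f"missing required argument: {name}")
    # Every supplied argument must be declared, with the right (string) type.
    for name, value in args.items():
        spec = params["properties"].get(name)
        if spec is None:
            problems.append(f"unexpected argument: {name}")
        elif spec["type"] == "string" and not isinstance(value, str):
            problems.append(f"argument {name} should be a string")
    return problems

# Example with the get_temperature schema defined above:
schema = {
    "type": "function",
    "function": {
        "name": "get_temperature",
        "description": "get temperature of a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "name"}},
            "required": ["city"],
        },
    },
}
print(validate_args(schema, {"city": "HΓ  Nα»™i"}))  # []
print(validate_args(schema, {}))  # ['missing required argument: city']
```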

conversation = [
    # The system prompt below is in Vietnamese. In English it reads: "You are a helpful
    # assistant with access to the following functions. Use them if needed - <metadata>.
    # To use these functions, respond with:
    # <functioncall> {"name": "function_name", "arguments": {"arg_1": "value_1", "arg_2": "value_2", ...}} </functioncall>
    # Special case you must handle:
    # - If no function matches the user's request, politely respond that you cannot help."
    {"role": "system", "content": f"""BαΊ‘n lΓ  mα»™t trợ lΓ½ hα»―u Γ­ch cΓ³ quyền truy cαΊ­p vΓ o cΓ‘c chα»©c nΔƒng sau. Sα»­ dα»₯ng chΓΊng nαΊΏu cαΊ§n -\n{str(functions_metadata)} Để sα»­ dα»₯ng cΓ‘c chα»©c nΔƒng nΓ y, hΓ£y phαΊ£n hα»“i vα»›i:\n<functioncall> {{\\"name\\": \\"function_name\\", \\"arguments\\": {{\\"arg_1\\": \\"value_1\\", \\"arg_2\\": \\"value_2\\", ...}} }} </functioncall>\n\nTrường hợp Δ‘αΊ·c biệt bαΊ‘n phαΊ£i xα»­ lΓ½:\n - NαΊΏu khΓ΄ng cΓ³ chα»©c nΔƒng nΓ o khα»›p vα»›i yΓͺu cαΊ§u cα»§a người dΓΉng, bαΊ‘n sαΊ½ phαΊ£n hα»“i mα»™t cΓ‘ch lα»‹ch sα»± rαΊ±ng bαΊ‘n khΓ΄ng thể giΓΊp được.""" },
    {"role": "user", "content": "Thời tiαΊΏt ở HΓ  Nα»™i Δ‘ang lΓ  bao nhiΓͺu Δ‘α»™"},
    {"role": "assistant", "content": """<functioncall> {"name": "get_temperature", "arguments": {"city": "HΓ  Nα»™i"}} </functioncall>"""},
    {"role": "user", "content": """<function_response> {"temperature" : "20 C"} </function_response>"""},
]

input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt").to(model.device)

out_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=768,
    do_sample=True,
    top_p=0.95,
    top_k=40,
    temperature=0.1,
    repetition_penalty=1.05,
)
assistant = tokenizer.batch_decode(out_ids[:, input_ids.size(1):], skip_special_tokens=True)[0].strip()
print("Assistant: ", assistant)
# >> Assistant:  Thời tiαΊΏt ở HΓ  Nα»™i hiện tαΊ‘i lΓ  khoαΊ£ng 20 Δ‘α»™ C.
# (English: "The current temperature in HΓ  Nα»™i is about 20 degrees C.")
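When the model decides to call a function, its reply contains a `<functioncall>` block that you must parse and execute yourself before feeding the result back as the `<function_response>` turn shown in the conversation above. A minimal sketch of that loop; the regex, `handle_assistant_reply` helper, and `get_temperature` stub are illustrative assumptions, not part of the model's API:

```python
import json
import re

def get_temperature(city: str) -> str:
    # Stub for illustration: in practice, call a real weather API here.
    return "20 C"

FUNCTIONS = {"get_temperature": get_temperature}

def handle_assistant_reply(reply):
    """Extract a <functioncall> payload from the model's reply, run the named
    function, and return the <function_response> turn to append to the
    conversation. Returns None for plain-text replies with no call."""
    match = re.search(r"<functioncall>\s*(\{.*\})\s*</functioncall>", reply, re.DOTALL)
    if match is None:
        return None  # ordinary answer, nothing to execute
    call = json.loads(match.group(1))
    args = call["arguments"]
    if isinstance(args, str):
        # Some replies may encode the arguments as a JSON string; decode again.
        args = json.loads(args)
    result = FUNCTIONS[call["name"]](**args)
    return f'<function_response> {json.dumps({"temperature": result})} </function_response>'

reply = '<functioncall> {"name": "get_temperature", "arguments": {"city": "HΓ  Nα»™i"}} </functioncall>'
print(handle_assistant_reply(reply))
# >> <function_response> {"temperature": "20 C"} </function_response>
```

The returned string would be appended as the next `user` turn (as in the example conversation above) and the model queried again to produce the final natural-language answer.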

Uploaded model

  • Developed by: hiieu
  • License: apache-2.0
  • Finetuned from model : Viet-Mistral/Vistral-7B-Chat

This Mistral-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.
