Model Card for uisikdag/qwen3-4b-instruct-culturay

Fine-tuned on the first 20,000 rows of the TR (Turkish) subset of the CulturaY dataset.

Inference Code

```python
import torch
from unsloth import FastLanguageModel
from transformers import TextStreamer

def run_inference():
    model_id = "uisikdag/qwen3-4b-instruct-culturay"
    print(f"Loading model from Hub: {model_id}")

    # Load the model and tokenizer from the Hub (4-bit quantized to reduce VRAM use)
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = model_id,
        max_seq_length = 2048,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

    # Turkish prompt: "How should one dress in cold weather?"
    messages = [
        {"role": "user", "content": "Soğuk havalarda nasıl giyinmeli?"}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize = False,
        add_generation_prompt = True,
    )

    print("Generating response...")
    # Stream tokens to stdout as they are generated (requires a CUDA GPU)
    _ = model.generate(
        **tokenizer(text, return_tensors = "pt").to("cuda"),
        max_new_tokens = 1000,
        temperature = 0.7, top_p = 0.8, top_k = 20,
        streamer = TextStreamer(tokenizer, skip_prompt = True),
    )

if __name__ == "__main__":
    run_inference()
```
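For reference, `apply_chat_template` renders the message list into a ChatML-style prompt string before tokenization. The helper below is a hypothetical reimplementation of that layout for illustration only; the authoritative template ships inside the tokenizer config, so rely on `apply_chat_template` in real code:

```python
def build_chatml_prompt(messages):
    """Approximate the ChatML-style prompt that Qwen-family chat
    templates render. Hypothetical sketch, not the actual template."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # add_generation_prompt=True appends an open assistant turn
    # so the model continues from there.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt(
    [{"role": "user", "content": "Soğuk havalarda nasıl giyinmeli?"}]
)
print(prompt)
```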

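The `generate` call combines temperature sampling with top-k and top-p (nucleus) filtering. A minimal pure-Python sketch of how those two filters prune the candidate token set, using toy logits with no relation to the real vocabulary:

```python
import math

def filter_logits(logits, top_k=20, top_p=0.8):
    """Keep the top_k highest-logit tokens, then keep the smallest
    prefix of those whose cumulative probability reaches top_p."""
    # Sort token indices by logit, highest first, and apply the top-k cut.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    kept = order[:top_k]

    # Softmax over the surviving logits (subtract max for stability).
    mx = max(logits[i] for i in kept)
    exps = [math.exp(logits[i] - mx) for i in kept]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Nucleus (top-p) cut: stop once cumulative mass reaches top_p.
    cum, nucleus = 0.0, []
    for idx, p in zip(kept, probs):
        nucleus.append(idx)
        cum += p
        if cum >= top_p:
            break
    return nucleus

# Toy 6-token vocabulary: top-k keeps 4 candidates, top-p trims to 3.
print(filter_logits([2.0, 1.0, 0.5, 0.1, -1.0, -2.0], top_k=4, top_p=0.8))
# → [0, 1, 2]
```

Sampling then draws the next token only from the surviving candidates, re-normalized; lower `top_p`/`top_k` values make the output more conservative.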
Framework versions

  • PEFT 0.18.0

Model tree for uisikdag/qwen3-4b-instruct-culturay

  • Base model: Qwen/Qwen3-4B-Base
  • Finetuned from base: Qwen/Qwen3-4B
  • This model: a PEFT adapter on Qwen/Qwen3-4B
Dataset used to train uisikdag/qwen3-4b-instruct-culturay