How to use codymd/Llama-3.2-1B-QuestionGen with Transformers:
```python
# Load model directly
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codymd/Llama-3.2-1B-QuestionGen")
model = AutoModelForCausalLM.from_pretrained("codymd/Llama-3.2-1B-QuestionGen", dtype="auto")
```
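To sanity-check the model, you can run a single generation with the objects loaded above. The prompt below is a hypothetical illustration; this fine-tune may expect a different input format, so adapt it to the examples the model was trained on.

```python
import torch

# Hypothetical prompt format; adapt to what the fine-tune actually expects.
prompt = (
    "Generate a question about the following passage:\n"
    "The Nile is the longest river in Africa."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```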
How to use codymd/Llama-3.2-1B-QuestionGen with Unsloth Studio:
```bash
# Linux / macOS: install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for codymd/Llama-3.2-1B-QuestionGen to start chatting
```
```powershell
# Windows (PowerShell): install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for codymd/Llama-3.2-1B-QuestionGen to start chatting
```
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for codymd/Llama-3.2-1B-QuestionGen to start chatting
```
How to use codymd/Llama-3.2-1B-QuestionGen with Unsloth:

```bash
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="codymd/Llama-3.2-1B-QuestionGen",
    max_seq_length=2048,
)
```
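Unsloth returns standard Hugging Face model and tokenizer objects, so the usual `generate` API works on them directly. A minimal sketch, again assuming a hypothetical prompt format:

```python
import torch

# Hypothetical prompt; the real input format depends on how the model was fine-tuned.
prompt = (
    "Generate a question about the following passage:\n"
    "Photosynthesis converts light energy into chemical energy."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,   # sample for more varied questions
        temperature=0.7,
    )

print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```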