PingVortexLM-20M
A small experimental language model based on the LLaMA architecture, trained on a custom high-quality English dataset of around 200M tokens. This model is an experiment: it is not designed for coherent text generation or logical reasoning, and it may produce repetitive or nonsensical output.
Built by PingVortex Labs.
```python
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

# Load the model and its tokenizer from the Hub.
model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")
tokenizer = PreTrainedTokenizerFast.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")

# Don't expect a coherent response.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, repetition_penalty=1.3)
print(tokenizer.decode(outputs[0]))
```
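Because a model this small tends to loop, sampling-based decoding (`do_sample`, `temperature`, `top_p`) can help vary outputs beyond what `repetition_penalty` alone achieves. A minimal sketch of those `generate` options, using a tiny randomly initialized Llama config as a stand-in so it runs without downloading the checkpoint (for real use, load `pvlabs/PingVortexLM-20M-v2-Base` as above):

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny randomly initialized stand-in model (assumption: substitute the
# pretrained PingVortexLM checkpoint in practice).
config = LlamaConfig(
    vocab_size=256,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
)
model = LlamaForCausalLM(config)
model.eval()

input_ids = torch.tensor([[1, 2, 3]])  # placeholder token ids
# Sampling instead of greedy decoding reduces verbatim repetition.
outputs = model.generate(
    input_ids,
    max_new_tokens=10,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.3,
)
print(outputs.shape)
```

The same keyword arguments work unchanged with the pretrained model and a real tokenized prompt.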