PingVortexLM-20M

A small experimental language model based on the LLaMA architecture, trained on a custom high-quality English dataset of roughly 200M tokens. This model is an experiment only: it is not designed for coherent text generation or logical reasoning, and it may produce repetitive or nonsensical output.

Built by PingVortex Labs.


Model Details

  • Parameters: 20M nominal (19.2M in the Safetensors weights)
  • Context length: 8192 tokens
  • Language: English only
  • Tensor type: BF16
  • License: Apache 2.0
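
To verify these numbers locally, a quick sanity check along these lines should work, assuming the repo ships a standard LLaMA config (max_position_embeddings holding the context length is a transformers convention, not something confirmed by PingVortex Labs):

from transformers import AutoConfig, LlamaForCausalLM

# Read the config without downloading the full weights
config = AutoConfig.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")
print(config.max_position_embeddings)  # expected: 8192

# Count parameters directly from the loaded weights
model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")
print(sum(p.numel() for p in model.parameters()))  # roughly 19.2M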

Usage

from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

# Load the model weights and tokenizer from the Hub
model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")
tokenizer = PreTrainedTokenizerFast.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")

# Don't expect a coherent response
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# repetition_penalty curbs the repetitive loops this model tends to fall into
outputs = model.generate(**inputs, max_new_tokens=50, repetition_penalty=1.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
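
A model this small often loops under greedy decoding. If that happens, sampling is worth a try; the sketch below uses illustrative values for temperature and top_p, not settings tuned or recommended for this model:

# Sample instead of decoding greedily; hyperparameter values are examples only
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))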

Made by PingVortex.
