Byte-Level BPE Tokenizer: tur_Latn (2K)

A Byte-Level BPE tokenizer trained on tur_Latn (Turkish, Latin script) data from Fineweb-2-HQ.

Training Details

Parameter              Value
Algorithm              Byte-Level BPE
Language               tur_Latn
Target vocab size      2,000
Final vocab size       318
Pre-tokenizer          custom:addition
Number handling        ltr_3digit
Contraction handling   False
Normalizer             NFC
Special tokens         <s>, </s>, <pad>, <unk>
Training shards        2

Note that the final vocabulary (318 tokens) falls well short of the 2,000-token target, presumably because the small, repetitive addition corpus runs out of productive merges long before the target is reached.
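
Neither custom:addition nor ltr_3digit is documented on this card. A plausible reading of ltr_3digit is that digit runs are pre-split into groups of up to three digits from the left before BPE merges are learned. The sketch below illustrates that reading only; the function name, regex, and space separator are assumptions, not the trainer's actual code:

import re

def split_numbers_ltr_3digit(text: str) -> str:
    # Hypothetical helper: break each digit run into groups of at most
    # three digits, scanning left to right.
    def chunk(m: re.Match) -> str:
        d = m.group(0)
        return " ".join(d[i:i + 3] for i in range(0, len(d), 3))
    return re.sub(r"\d+", chunk, text)

print(split_numbers_ltr_3digit("1200+345=1545"))  # -> "120 0+345=154 5"
print(split_numbers_ltr_3digit("1234567"))        # -> "123 456 7"

A right-to-left variant would instead yield "1 234 567", matching conventional thousands grouping, which is why the direction is worth specifying in the setting's name.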

Usage

from transformers import AutoTokenizer

# Load the published tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("flexitok/maddition_tur_Latn_2000")
tokens = tokenizer.encode("Hello, world!")
print(tokens)  # list of token IDs
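
To check behavior on in-domain text, the sample string from this card can be round-tripped with standard transformers calls (note that the sample contains a literal backslash followed by n, not a newline; see the command at the end of the card):

# Sample string from this card; "\\n" encodes a literal backslash + n
text = "yirmi iki+dokuz=otuz bir\\ntwenty two+nine=thirty one"
ids = tokenizer.encode(text, add_special_tokens=False)
print(ids)
print(tokenizer.decode(ids))  # should reproduce the input text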

Files

  • tokenizer.json — Full HuggingFace tokenizer
  • vocab.json — Vocabulary mapping
  • merges.txt — BPE merge rules
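
Because tokenizer.json is a complete, self-contained serialization, it can also be loaded directly with the tokenizers library rather than transformers. A minimal sketch, assuming the file has been downloaded locally (e.g. via huggingface_hub):

from tokenizers import Tokenizer

# Load the standalone tokenizer file (no transformers dependency)
tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("yirmi iki+dokuz=otuz bir")
print(enc.tokens)  # token strings
print(enc.ids)     # token IDs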

Sample Encoding

Text (the \n is a literal backslash followed by n, matching test_string in the command below):

  yirmi iki+dokuz=otuz bir\ntwenty two+nine=thirty one

Tokens (first 20; Ġ is the byte-level marker for a preceding space):

  yirmi, Ġ, iki, +, dokuz, =, otuz, Ġ, bir, \, n, t, w, e, n, t, y, Ġ, t, w

Token IDs (first 20):

  311, 223, 276, 13, 286, 31, 310, 223, 307, 62, 80, 86, 89, 71, 80, 86, 91, 223, 86, 89

The English half of the string falls back to character-by-character tokens, as expected for a 318-token vocabulary trained only on Turkish text.
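
The token and ID columns above can be regenerated with standard transformers calls, reusing tokenizer and text from the Usage section (shown as a quick check; the exact output depends on the published files):

ids = tokenizer.encode(text, add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids)[:20])  # expect ['yirmi', 'Ġ', 'iki', '+', ...]
print(ids[:20])                                   # expect [311, 223, 276, 13, ...]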

Command used to create this tokenizer:

['/home/gsa/tokenizers2/flexitok/tokenizer_training/train_tokenizers.py',
 'algorithm=bpe',
 'vocab_size=2000',
 'langs=[tur_Latn]',
 'data_dir=/scratch/gsa/data/multilingual-addition/',
 'output_dir=/scratch/gsa/trained_tokenizers/multilingual_addition',
 'pretokenizer=custom:addition',
 'number_handling=ltr_3digit',
 'add_numbers=false',
 'handle_contractions=false',
 'unicode_normalization=nfc',
 'use_byte_level_regex=false',
 'byte_fallback=false',
 'strip_zero_width=false',
 'cjk_char_split=false',
 'add_cjk_chars=false',
 'max_lines=-1',
 'test_string=yirmi iki+dokuz=otuz bir\\ntwenty two+nine=thirty one',
 'hf.publish_to_hf=true',
 'hf_repo_prefix=flexitok/',
 'hf.hf_repo_id=flexitok/maddition_tur_Latn_2000',
 'hf.collections=[flexitok/multilingual_addition_tokenizers]']