Irina Proskurina
iproskurina
AI & ML interests
LLMs: quantization, pre-training
Recent Activity
- updated a model 5 days ago: iproskurina/Mistral-7B-Instruct-v0.3-int4-f-vf-alpha05
- published a model 5 days ago: iproskurina/Mistral-7B-Instruct-v0.3-int4-f-vf-alpha05
- updated a model 5 days ago: iproskurina/Mistral-7B-Instruct-v0.3-int4-f-vf-alpha005-upper
LMs + Topological Data Analysis 🌌
Attention graph features extracted from LMs fine-tuned on linguistic acceptability corpora
- iproskurina/tda-ruroberta-large-ru-cola (Text Classification • 0.4B • Updated • 5)
- iproskurina/tda-bert-en-cola (Text Classification • 0.1B • Updated • 10)
- iproskurina/tda-roberta-large-en-cola (Text Classification • 0.4B • Updated • 17)
- iproskurina/tda-rubert-ru-cola (Text Classification • 0.2B • Updated • 4)
BabyLMs 🧸
A collection of models submitted to the 2023 BabyLM Challenge
French Bias & Ethics Benchmarking Suite
A curated collection of datasets in French, designed to evaluate biases in LLMs
Quantized LLMs with GPTQ
LLMs quantized with GPTQ
LMs for French 🥐