How to use NitzanBar/umls-spanbert with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="NitzanBar/umls-spanbert")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("NitzanBar/umls-spanbert")
model = AutoModelForSequenceClassification.from_pretrained("NitzanBar/umls-spanbert")
```
Based on the paper "UmlsBERT: Augmenting Contextual Embeddings with a Clinical Metathesaurus" (https://aclanthology.org/2021.naacl-main.139.pdf) and the GitHub repo: https://github.com/gmichalo/UmlsBERT.
The base model was changed from BERT to SpanBERT.
Trained from scratch on the MIMIC dataset, using the UMLS Metathesaurus to select which words in the text to mask.
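The UMLS-guided masking step can be illustrated with a minimal sketch. This is not the repository's actual implementation; the term set, function name, and masking probability are hypothetical, chosen only to show the idea of preferentially masking clinical-vocabulary tokens during pretraining:

```python
import random

def mask_umls_terms(tokens, umls_terms, mask_token="[MASK]", p=0.15, seed=0):
    """Illustrative sketch: mask tokens that appear in a UMLS-derived
    vocabulary with probability p, leaving other tokens untouched.
    `umls_terms` is a hypothetical set of lower-cased clinical terms."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if tok.lower() in umls_terms and rng.random() < p:
            out.append(mask_token)
        else:
            out.append(tok)
    return out

tokens = "patient denies chest pain and dyspnea".split()
# p=1.0 masks every matching clinical term, for demonstration
masked = mask_umls_terms(tokens, {"pain", "dyspnea"}, p=1.0)
print(masked)
```

In real pretraining the masked positions would then be predicted by the language model; here the sketch only shows how a clinical vocabulary biases which positions get masked.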
This achieved better accuracy on the MedNLI dataset:
- BERT model accuracy: 83%
- SpanBERT model accuracy: 86%