Instructions to use lizhou21/roberta-large-retacred with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use lizhou21/roberta-large-retacred with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="lizhou21/roberta-large-retacred")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("lizhou21/roberta-large-retacred")
model = AutoModelForMaskedLM.from_pretrained("lizhou21/roberta-large-retacred")
```
- Notebooks
- Google Colab
- Kaggle
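The fill-mask pipeline above returns a list of candidate dicts, each with a `token_str` and a `score` key. As a minimal sketch of how those results can be post-processed, the helper below (the name `top_tokens` is a hypothetical choice, not part of the model or library) ranks the candidates by score; the commented-out lines show typical use with the actual model, which downloads the weights on first call:

```python
# Sketch of a small helper around fill-mask pipeline output.
# Each prediction dict carries "token_str" (the predicted token) and
# "score" (the model's probability for that token).

def top_tokens(predictions, k=3):
    """Return the k highest-scoring predicted tokens, whitespace-stripped."""
    ranked = sorted(predictions, key=lambda p: p["score"], reverse=True)
    return [p["token_str"].strip() for p in ranked[:k]]

# Typical use (note: RoBERTa checkpoints use "<mask>" as the mask token):
# from transformers import pipeline
# pipe = pipeline("fill-mask", model="lizhou21/roberta-large-retacred")
# print(top_tokens(pipe("The capital of France is <mask>.")))
```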
Model Card for LiLiZhou/roberta-large-retacred
Model Description
Climate performance model card
| LiLiZhou/roberta-large-retacred | Answer |
|---|---|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 1h 34m 16s |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 11d 18h 36m |
| 4. What was the power draw of the GPU and CPU? | 0.26 kW |
| 5. At which geo location were the computations performed? | Denmark |
| 6. What was the energy mix at the geo location? | 155.7 to 239 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 63.55674 to 97.5598 kg |
| 8. How much CO2eq was emitted for all experiments? | N/A |
| 9. What is the average CO2eq emission for the inference of one sample? | N/A |