SentenceTransformer based on isy-thl/multilingual-e5-base-learning-outcome-skill-tuned

This is a sentence-transformers model finetuned from isy-thl/multilingual-e5-base-learning-outcome-skill-tuned. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: isy-thl/multilingual-e5-base-learning-outcome-skill-tuned
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
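The two modules after the transformer can be sketched in plain NumPy: module (1) mean-pools the token embeddings over non-padding positions, and module (2) L2-normalizes the result. This is an illustrative stand-in with random vectors, not the actual XLM-RoBERTa encoder output:

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Mean-pool token embeddings over non-padding positions, then L2-normalize.

    token_embeddings: (seq_len, 768) hidden states from the transformer
    attention_mask:   (seq_len,) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask.astype(bool)
    pooled = token_embeddings[mask].mean(axis=0)   # module (1): mean pooling
    return pooled / np.linalg.norm(pooled)         # module (2): Normalize()

# Toy example: seq_len=4, hidden size 768, last position is padding
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 768))
mask = np.array([1, 1, 1, 0])
emb = mean_pool_and_normalize(tokens, mask)
print(emb.shape, round(float(np.linalg.norm(emb)), 4))  # (768,) 1.0
```

Because of the final Normalize() step, every embedding this model produces has unit length, which makes cosine similarity equivalent to a plain dot product.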

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("maxHPI90/multilingual-e5-base-iscedf-01")
# Run inference
sentences = [
    'query: Tenses explained. Future tense (will / going to)Present tense (simple / progressive)Past tense (simple / progressive)Present perfect tense (simple / progressive)Past perfect tense (simple / progressive)',
    'passage: languages not further defined.',
    'passage: transport services. Transport is the study of operating, navigating and directing ships, train, aircraft and other forms of transportation. Programmes and qualifications with the following main content are classified here: Aircraft operation Air traffic control Air traffic safety Cabin crew training Communication (air, railway, road etc.) programmes Crane and truck driving Driving programmes Flying and navigation Navigation technologies Postal service Railway operations Road motor vehicle operations Ship operation Shipping Transport studies',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.4377, -0.0224],
#         [ 0.4377,  1.0000, -0.1168],
#         [-0.0224, -0.1168,  1.0000]])
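Since the embeddings are unit-normalized and the similarity function is cosine, semantic search over a passage collection reduces to dot products followed by a sort. A toy sketch with hand-made 2-dimensional vectors standing in for model.encode(...) outputs (the function name top_k is hypothetical; note the "query: " / "passage: " prefixes the example sentences above use):

```python
import numpy as np

def top_k(query_emb: np.ndarray, passage_embs: np.ndarray, k: int = 2):
    """Rank passages by cosine similarity; embeddings are assumed L2-normalized,
    so cosine similarity is just the dot product."""
    scores = passage_embs @ query_emb
    order = np.argsort(-scores)[:k]
    return [(int(i), float(scores[i])) for i in order]

# Toy unit-length embeddings standing in for model.encode(...) output
q = np.array([1.0, 0.0])
passages = np.array([
    [0.8, 0.6],   # fairly similar to the query
    [0.0, 1.0],   # orthogonal
    [1.0, 0.0],   # identical direction
])
print(top_k(q, passages))  # [(2, 1.0), (0, 0.8)]
```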

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.6824
cosine_accuracy@3 0.8471
cosine_accuracy@5 0.9059
cosine_accuracy@10 0.9529
cosine_precision@1 0.6824
cosine_precision@3 0.2824
cosine_precision@5 0.1812
cosine_precision@10 0.0953
cosine_recall@1 0.6824
cosine_recall@3 0.8471
cosine_recall@5 0.9059
cosine_recall@10 0.9529
cosine_ndcg@5 0.8018
cosine_ndcg@10 0.8165
cosine_mrr@10 0.7728
cosine_map@100 0.7753
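Each query in this benchmark has a single relevant passage, which is why accuracy@k and recall@k coincide in the table and precision@k equals accuracy@k divided by k (e.g. 0.8471 / 3 ≈ 0.2824). For the single-relevant-document case the metrics reduce to simple formulas (function names hypothetical):

```python
import math

def accuracy_at_k(rank: int, k: int) -> float:
    """1.0 if the single relevant document is within the top k (rank is 1-based)."""
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank: int, k: int) -> float:
    """With one relevant document the ideal DCG is 1, so NDCG@k = 1 / log2(rank + 1)."""
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0

# Example: relevant passage retrieved at rank 2
print(accuracy_at_k(2, 10))          # 1.0
print(round(ndcg_at_k(2, 10), 4))    # 0.6309
```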

Training Details

Training Dataset

Unnamed Dataset

  • Size: 340 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 340 samples:
    - anchor: string, min 14 tokens, mean 166.05 tokens, max 336 tokens
    - positive: string, min 9 tokens, mean 77.6 tokens, max 250 tokens
  • Samples:
    - anchor: query: Digitalisierung am Beispiel Smart Home. In diesem Kurs erfahren Sie, was ein Smart Home ausmacht, welche Wünsche und Ziele ein Benutzer üblicherweise mit dem Smart Home verbindet und was davon mit aktuellen Produkten schon realisierbar ist. Darüber hinaus bekommen Sie Hintergrundwissen über die technischen Herausforderungen und verschiedene Lösungsansätze mit ihren Vor- und Nachteilen. Dazu gehört auch eine kritische Betrachtung von Sicherheitslücken und Datenschutz, sowie Vendor Lock-In. Schließlich können Sie entdecken, was Smart Home mit Digitalisierung zu tun hat, wie Digitalisierung in anderen Branchen schon gewirkt hat und was für Auswirkungen in Branchen zu erwarten sind, wo die Digitalisierung gerade erst anfängt.Hier geht's zum Kursportrait in der OPEN vhb-Themenwelt: https://open.vhb.org/themenwelt/kursportraits/smart-home/
      positive: passage: information and communication technologies not elsewhere classified. Information technology studies not fitting in the detailed fields are classified here: Artificial intelligence
    - anchor: query: BHT Kurscontainer 3 . Dies ist der Kurscontainer für Gruppe 3.
      positive: passage: business and administration not further defined.
    - anchor: query: Prompt-Labor Hochschullehre – Anwendungen. Der erfolgreiche Grundlagenkurs Prompt-Labor findet hier seine Fortsetzung mit neuen anwendungsorientierten Einheiten rund um das Thema Generative KI in der Hochschullehre . Freuen Sie sich auf spannende Beiträge und neue Gesichter im Kurs Prompt-Labor Anwendungen!
      positive: passage: education science. Education science is the study of the learning process and the theories, methods and techniques of imparting knowledge to others. Programmes and qualifications with the following main content are classified here: Curriculum studies Didactics Educational assessment, testing and measurement Educational evaluation and research Paedagogical sciences
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
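With scale 20.0 and cos_sim, this loss is cross-entropy over the scaled anchor-to-positive cosine similarity matrix, where the i-th positive is the target for the i-th anchor and every other in-batch positive serves as a negative. A minimal NumPy sketch of that computation (illustrative, not the library's implementation):

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Cross-entropy over scaled cosine similarities; row i's target is column i,
    so all other in-batch positives act as negatives for anchor i."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch) cos_sim * scale
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return float(-log_probs[idx, idx].mean())

# Perfectly matched pairs vs. a batch where every positive belongs to another anchor
print(round(mnr_loss(np.eye(4), np.eye(4)), 4))                       # 0.0
print(round(mnr_loss(np.eye(4), np.roll(np.eye(4), 1, axis=0)), 4))   # 20.0
```

Because the in-batch positives double as negatives, larger batches give the loss more (and harder) negatives per anchor for free.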
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • gradient_accumulation_steps: 2
  • learning_rate: 2e-05
  • weight_decay: 0.01
  • num_train_epochs: 12
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • save_only_model: True
  • fp16: True

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 12
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: True
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss iscedf_cosine_ndcg@10
1 10 - 0.6893
3 20 - 0.7603
4 30 - 0.7893
6 40 - 0.8048
8 50 1.1322 0.8110
9 60 - 0.8165

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 5.1.1
  • Transformers: 4.52.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.9.0
  • Datasets: 4.4.1
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Model tree for maxHPI90/multilingual-e5-base-iscedf-01
