SentenceTransformer based on ibm-granite/granite-embedding-english-r2

This is a sentence-transformers model finetuned from ibm-granite/granite-embedding-english-r2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: ibm-granite/granite-embedding-english-r2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("shatonix/granite-embedding-math-cs")
# Run inference
sentences = [
    'Calculate $(-1)^{47} + 2^{(3^3+4^2-6^2)}$.',
    'Context: \nAnswer: 127',
    '4750',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7084, 0.1913],
#         [0.7084, 1.0000, 0.2200],
#         [0.1913, 0.2200, 1.0000]])

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.604
cosine_accuracy@3 0.666
cosine_accuracy@5 0.688
cosine_accuracy@10 0.716
cosine_precision@1 0.604
cosine_precision@3 0.222
cosine_precision@5 0.1376
cosine_precision@10 0.0716
cosine_recall@1 0.604
cosine_recall@3 0.666
cosine_recall@5 0.688
cosine_recall@10 0.716
cosine_ndcg@10 0.659
cosine_mrr@10 0.641
cosine_map@100 0.6484

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.604
cosine_accuracy@3 0.666
cosine_accuracy@5 0.694
cosine_accuracy@10 0.72
cosine_precision@1 0.604
cosine_precision@3 0.222
cosine_precision@5 0.1388
cosine_precision@10 0.072
cosine_recall@1 0.604
cosine_recall@3 0.666
cosine_recall@5 0.694
cosine_recall@10 0.72
cosine_ndcg@10 0.6608
cosine_mrr@10 0.6421
cosine_map@100 0.6495

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.61
cosine_accuracy@3 0.672
cosine_accuracy@5 0.69
cosine_accuracy@10 0.72
cosine_precision@1 0.61
cosine_precision@3 0.224
cosine_precision@5 0.138
cosine_precision@10 0.072
cosine_recall@1 0.61
cosine_recall@3 0.672
cosine_recall@5 0.69
cosine_recall@10 0.72
cosine_ndcg@10 0.6633
cosine_mrr@10 0.6454
cosine_map@100 0.6531

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.612
cosine_accuracy@3 0.67
cosine_accuracy@5 0.69
cosine_accuracy@10 0.712
cosine_precision@1 0.612
cosine_precision@3 0.2233
cosine_precision@5 0.138
cosine_precision@10 0.0712
cosine_recall@1 0.612
cosine_recall@3 0.67
cosine_recall@5 0.69
cosine_recall@10 0.712
cosine_ndcg@10 0.6612
cosine_mrr@10 0.645
cosine_map@100 0.652

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.602
cosine_accuracy@3 0.656
cosine_accuracy@5 0.68
cosine_accuracy@10 0.722
cosine_precision@1 0.602
cosine_precision@3 0.2187
cosine_precision@5 0.136
cosine_precision@10 0.0722
cosine_recall@1 0.602
cosine_recall@3 0.656
cosine_recall@5 0.68
cosine_recall@10 0.722
cosine_ndcg@10 0.6584
cosine_mrr@10 0.6386
cosine_map@100 0.6448
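
The five tables above report the same retrieval evaluation at the Matryoshka output dimensions 768, 512, 256, 128, and 64, matching the dim_* columns in the Training Logs below. A minimal sketch of how such metrics can be reproduced with the Sentence Transformers InformationRetrievalEvaluator, using hypothetical query and corpus dictionaries:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("shatonix/granite-embedding-math-cs")

# Hypothetical evaluation data: ids mapped to texts, plus relevance judgments.
queries = {"q1": 'Calculate $(-1)^{47} + 2^{(3^3+4^2-6^2)}$.'}
corpus = {"d1": 'Context: \nAnswer: 127', "d2": '4750'}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    truncate_dim=256,  # evaluate at one of the Matryoshka dimensions
    name="dim_256",
)
results = evaluator(model)
print(results)
# dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100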

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,500 training samples
  • Columns: anchor, positive, and id
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8 tokens, mean 80.08 tokens, max 512 tokens
    • positive (string): min 9 tokens, mean 165.53 tokens, max 512 tokens
    • id (string): min 3 tokens, mean 3.81 tokens, max 4 tokens
  • Samples:
    Sample 1
      anchor: Stella’s antique shop has 3 dolls, 2 clocks and 5 glasses for sale. She sells the dolls for $5 each. The clocks are priced at $15 each. The glasses are priced at $4 each. If she spent $40 to buy everything and she sells all of her merchandise, how much profit will she make?
      positive: Context:
        Answer: 25
      id: 3430
    Sample 2
      anchor: You are tasked with creating a Ruby program that defines a service for creating a project in a Continuous Integration (CI) system. The service should be able to execute with valid parameters and handle specific scenarios.
        The program should include the following:
        - A class called Ci::CreateProjectService that defines the service for creating a project.
        - A method within the Ci::CreateProjectService class called execute that takes in three parameters: current_user (representing the current user), project (representing the project to be created), and ci_origin_project (optional, representing the project to use as a template for settings and jobs).
        - The execute method should handle the following scenarios:
        1. When executed with valid parameters, it should return a new instance of Ci::Project that is persisted.
        2. When executed without a project dump (empty string), it should raise an exception.
        3. When executed with a ci_origin_project for forking, it should use ...
      positive: Context:
        Answer: ruby<br>class Ci::CreateProjectService<br> def execute(current_user, project, ci_origin_project = nil)<br> if project.empty?<br> raise StandardError, 'Project dump is required'<br> end<br><br> new_project = Ci::Project.new<br> new_project.save<br><br> if ci_origin_project<br> new_project.shared_runners_enabled = ci_origin_project.shared_runners_enabled<br> new_project.public = ci_origin_project.public<br> new_project.allow_git_fetch = ci_origin_project.allow_git_fetch<br> end<br><br> new_project<br> end<br>end<br>
      id: 656
    Sample 3
      anchor: Why is the Insertion Sort algorithm considered optimal for nearly sorted datasets, and how does its time complexity compare to other quadratic sorting algorithms?
      positive: Context:
        Answer: Insertion Sort operates in O(n²) time complexity in the worst case, but for nearly sorted datasets, it achieves O(n) time complexity. This is because it only requires a minimal number of swaps to place elements in order. For datasets where most elements are already in their correct positions, the number of inversions (pairs out of order) is small, reducing the number of comparisons and swaps. This contrasts with other quadratic algorithms like Selection Sort, which must scan the entire dataset for each element, leading to O(n²) operations regardless of initial order. The efficiency of Insertion Sort for nearly sorted data stems from its ability to leverage existing order, making it a better choice for such scenarios.
      id: 1305
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    
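A minimal sketch of how a training set with these columns and this loss configuration could be wired together, assuming a Hugging Face datasets.Dataset built from in-memory examples (the rows below are placeholders, not the actual training data):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")

# Placeholder examples mirroring the anchor/positive/id columns described above.
train_dataset = Dataset.from_dict({
    "anchor": ["Why is Insertion Sort efficient on nearly sorted data?"],
    "positive": ["Context: \nAnswer: Insertion Sort approaches O(n) when few inversions remain."],
    "id": ["1305"],
})

# MultipleNegativesRankingLoss treats other in-batch positives as negatives;
# MatryoshkaLoss applies it at each truncated embedding dimension.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)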

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates

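For reference, a hedged sketch of how the non-default hyperparameters above map onto SentenceTransformerTrainingArguments; the output directory and save_strategy are assumptions (load_best_model_at_end requires the save and evaluation strategies to match), and num_train_epochs comes from the full list below:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="granite-embedding-math-cs",   # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",                    # assumed; required by load_best_model_at_end
    load_best_model_at_end=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts within a batch
)
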
All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
-1 -1 - 0.6213 0.6214 0.6163 0.6036 0.5899
0.0709 10 7.9281 - - - - -
0.1418 20 7.4864 - - - - -
0.2128 30 5.7244 - - - - -
0.2837 40 5.5573 - - - - -
0.3546 50 4.4921 - - - - -
0.4255 60 4.7436 - - - - -
0.4965 70 4.4213 - - - - -
0.5674 80 4.26 - - - - -
0.6383 90 4.3477 - - - - -
0.7092 100 5.3008 - - - - -
0.7801 110 4.8522 - - - - -
0.8511 120 4.116 - - - - -
0.9220 130 4.3905 - - - - -
0.9929 140 4.6642 - - - - -
1.0 141 - 0.6465 0.6459 0.6489 0.6534 0.6513
1.0638 150 3.6441 - - - - -
1.1348 160 3.7862 - - - - -
1.2057 170 3.8553 - - - - -
1.2766 180 4.1245 - - - - -
1.3475 190 3.2211 - - - - -
1.4184 200 3.6225 - - - - -
1.4894 210 3.2978 - - - - -
1.5603 220 4.1481 - - - - -
1.6312 230 3.7347 - - - - -
1.7021 240 3.3605 - - - - -
1.7730 250 4.1893 - - - - -
1.8440 260 3.0874 - - - - -
1.9149 270 3.6089 - - - - -
1.9858 280 3.2254 - - - - -
2.0 282 - 0.6603 0.6575 0.6623 0.6604 0.6595
2.0567 290 2.699 - - - - -
2.1277 300 3.1953 - - - - -
2.1986 310 2.6364 - - - - -
2.2695 320 3.7468 - - - - -
2.3404 330 2.355 - - - - -
2.4113 340 2.6586 - - - - -
2.4823 350 2.7598 - - - - -
2.5532 360 2.846 - - - - -
2.6241 370 2.7356 - - - - -
2.6950 380 2.4392 - - - - -
2.7660 390 3.1543 - - - - -
2.8369 400 2.6799 - - - - -
2.9078 410 2.657 - - - - -
2.9787 420 2.395 - - - - -
3.0 423 - 0.659 0.6608 0.6633 0.6612 0.6584
  • The saved checkpoint corresponds to the epoch 3.0 (step 423) row; its dim_* nDCG@10 values match the evaluation tables above.

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.2
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}