Merge method from the paper *Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch* ([arXiv:2311.03099](https://arxiv.org/abs/2311.03099)).
This is a merge of pre-trained language models created using mergekit.

This model was merged using the DARE TIES merge method, with unsloth/Meta-Llama-3.1-8B-Instruct as the base. The following models were included in the merge (a sketch of how DARE TIES combines them follows the list):
- akjindal53244/Llama-3.1-Storm-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- NCSOFT/Llama-VARCO-8B-Instruct
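For reference, DARE TIES first turns each fine-tuned model into a sparse "task vector" against the base (DARE's random drop-and-rescale, controlled by `density` in the config below), then resolves sign conflicts TIES-style before adding the weighted deltas back onto the base. Below is a rough per-tensor sketch of the idea, not mergekit's actual implementation; the function names and the exact sign-election/normalization details are illustrative only:

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: zero each delta entry with probability (1 - density),
    then rescale the survivors by 1/density so the expected delta is unchanged."""
    if density >= 1.0:
        return delta  # density 1.0 keeps the full task vector
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density

def dare_ties(base: torch.Tensor,
              tuned: list[torch.Tensor],
              densities: list[float],
              weights: list[float]) -> torch.Tensor:
    # Task vector of each fine-tune relative to the shared base model,
    # sparsified by DARE and scaled by its merge weight.
    deltas = [dare(t - base, rho) * w
              for t, rho, w in zip(tuned, densities, weights)]
    stacked = torch.stack(deltas)
    # TIES-style sign election: keep only components whose sign agrees
    # with the majority (by weighted magnitude) across models.
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    return base + (stacked * agree).sum(dim=0)
```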
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: dare_ties
slices:
  - sources:
      - layer_range: [0, 32]
        model: akjindal53244/Llama-3.1-Storm-8B
        parameters:
          density: 0.8
          weight: 0.13
      - layer_range: [0, 32]
        model: arcee-ai/Llama-3.1-SuperNova-Lite
        parameters:
          density: 1.0
          weight: 0.37
      - layer_range: [0, 32]
        model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
        parameters:
          density: 1.0
          weight: 0.13
      - layer_range: [0, 32]
        model: NCSOFT/Llama-VARCO-8B-Instruct
        parameters:
          density: 0.8
          weight: 0.37
      - layer_range: [0, 32]
        model: unsloth/Meta-Llama-3.1-8B-Instruct
tokenizer_source: base
```
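Running this config through mergekit (e.g. `mergekit-yaml config.yaml ./merged`) produces a standard Llama-3.1 checkpoint that loads like any other Transformers model. A minimal usage sketch; the local path `./merged` is a placeholder for wherever the merge was written, or for this model's Hub repo ID:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"  # placeholder: mergekit output dir or Hub repo ID

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what model merging does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```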
Evaluation results on the Open LLM Leaderboard benchmarks (detailed per-task results are available on the leaderboard):
| Metric | Value (%) |
|---|---|
| Avg. | 29.72 |
| IFEval (0-shot) | 78.38 |
| BBH (3-shot) | 32.08 |
| MATH Lvl 5 (4-shot) | 20.02 |
| GPQA (0-shot) | 7.27 |
| MuSR (0-shot) | 9.92 |
| MMLU-PRO (5-shot) | 30.66 |