---
dataset_name: openmed-community/mmlu-5-options-rl-ready
tags:
- MMLU
- evaluation
- DPO
- RL
- SFT
pretty_name: MMLU – 5-Options RL-Ready
license: mit
language:
- en
task_categories:
- multiple-choice
- question-answering
- reinforcement-learning
dataset_info:
  features:
  - name: question
configs:
- split: test
  path: data/test-*
---

# MMLU – 5-Options RL-Ready

**A standardized, RL-friendly remix of MMLU** with explicit negatives and a unified five-option presentation string for each question. Ideal for **DPO** and other RL setups while remaining a drop-in replacement for classic multiple-choice evaluation.

## What’s inside

* **Splits & size:** ~**97.8k train** + **2k test** ≈ **99.8k total**.
* **Schema (core fields):**

  * `question: str`
  * `choices: list[str]` *(canonical options, typically 4 as in the original MMLU)*
  * `answer: int` *(0-based index into `choices`)*
  * `task: str` *(subject/task label; ~55 values)*
  * `output: str` *(text of the correct option)*
  * `options: str` *(a single markdown-style block enumerating the choices as **(1)…(5)** for unified five-option prompts)*
  * `letter: str` *(tag of the correct option, e.g. `(3)`)*
  * `incorrect_letters: list[str]`
  * `incorrect_answers: list[str]`
  * `single_incorrect_answer: str` *(one negative for pairwise preferences)*
  * `system_prompt: str` *(a single default value)*
  * `input: str` *(ready-to-use user-message text)*

> Note: The dataset provides both the **original structured `choices` array** (as in MMLU) and a **five-option `options` string** for standardized, list-variant prompting in RL pipelines.
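
Since `choices` typically holds four options while `options` enumerates five, it can be useful to recover a plain list from the enumerated block. A minimal sketch, relying only on the `(n)` tags documented above (the exact separators between choices are an assumption):

```python
import re

def split_options(options: str) -> list[str]:
    """Split a '(1) ... (5)' enumerated block into a list of option texts.

    Assumes each choice is introduced by a '(n)' tag; whether choices are
    separated by spaces or newlines may vary, so we split on the tags only.
    """
    parts = re.split(r"\(\d+\)\s*", options)
    return [part.strip() for part in parts if part.strip()]
```

Because only the `(n)` tags act as delimiters, this handles both inline (`(1) a (2) b`) and one-choice-per-line layouts.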

## Why it’s RL-ready

* **Explicit negatives:** `incorrect_answers` and `single_incorrect_answer` enable **DPO**, pairwise preferences, and contrastive training without extra preprocessing.
* **Unified prompts:** `system_prompt` + `input`, together with the five-option `options` string, make it simple to build consistent chat-style prompts across frameworks.
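
The bullets above map directly onto preference-pair construction. A minimal sketch using only the field names from the schema (the helper name and pair layout are ours, not part of the dataset):

```python
def to_dpo_pairs(record: dict) -> list[dict]:
    """Expand one record into (prompt, chosen, rejected) preference pairs.

    `system_prompt` + `input` form the chat prompt, `output` is the
    positive, and each entry of `incorrect_answers` yields one negative.
    For a single pair per question, use `single_incorrect_answer` instead.
    """
    prompt = [
        {"role": "system", "content": record["system_prompt"]},
        {"role": "user", "content": record["input"]},
    ]
    return [
        {"prompt": prompt, "chosen": record["output"], "rejected": negative}
        for negative in record["incorrect_answers"]
    ]
```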

## Example record

```json
{
  "question": "Which statement best describes the critics' reaction to the Segway?",
  "choices": ["Nothing but an electrical device.", "A disappointing engineering mistake.", "An expensive and disappointing invention.", "Disappointing, but still a successful device."],
  "answer": 3,
  "task": "miscellaneous",
  "output": "Disappointing, but still a successful device.",
  "options": "(1) ... (2) ... (3) ... (4) ... (5) ...",
  "letter": "(3)",
  "incorrect_letters": ["(1)", "(2)", "(4)", "(5)"],
  "incorrect_answers": ["...", "...", "...", "..."],
  "single_incorrect_answer": "...",
  "system_prompt": "You are a helpful tutor.",
  "input": "Choose the correct answer from the options below.\n\n<question + (1)…(5) options>"
}
```

## Intended uses

* **Evaluation** of general reasoning on MMLU tasks with standardized five-option prompts.
* **SFT** with chat-style formatting.
* **DPO / RL** using explicit positive-vs-negative pairs built from `single_incorrect_answer` or the full `incorrect_answers` list.

## Source & attribution

Derived from the original **MMLU** dataset by Hendrycks et al. (CAIS): [cais/mmlu](https://huggingface.co/datasets/cais/mmlu). Please cite the original work when using this derivative.