Datasets: Align annotator terminology to expert/non_expert
README.md CHANGED

@@ -38,8 +38,8 @@ This release uses one main clip table in `data/metadata.jsonl`.
 ## Summary
 
 - 11.712 total hours of audio, about 10.039 seconds per clip on average.
-- There are 9
-- Rating rows by rater type:
+- There are 9 non-expert raters and 3 expert raters.
+- Rating rows by rater type: non_expert=12600, expert=12600.
 - Each rating row contains 5 integer scores from 1 to 10.
 
 ## Main Columns
@@ -47,8 +47,8 @@ This release uses one main clip table in `data/metadata.jsonl`.
 - `file_name`, `wav_name`, `prompt_id`, `prompt_text`
 - `scene_category`, `sound_event_count`, `audioset_ontology`
 - `system_id`, `system_name`
-- `
-- `
+- `non_expert_*_mean`, `expert_*_mean`
+- `non_expert_*_raw_scores`, `expert_*_raw_scores`
 
 The five evaluation dimensions are `production_complexity`, `content_enjoyment`, `production_quality`, `textual_alignment`, and `content_usefulness`.
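For readers of the updated README, here is a minimal sketch of how a `data/metadata.jsonl` record with the renamed columns could be parsed and a `*_mean` column recomputed from the corresponding `*_raw_scores` column. The sample record and its values are hypothetical, assembled only from the column patterns shown in the diff (the concrete field name `expert_production_quality_raw_scores` is an assumed expansion of the `expert_*_raw_scores` pattern).

```python
import json
from statistics import mean

# Hypothetical clip record following the README's column patterns;
# the values below are illustrative, not taken from the dataset.
record = {
    "file_name": "clips/0001.wav",
    "system_id": "sys_a",
    # Assumed expansion of `expert_*_raw_scores`: per-rater integer
    # scores from 1 to 10 (3 expert raters, per the Summary section).
    "expert_production_quality_raw_scores": [7, 8, 6],
}

# data/metadata.jsonl stores one JSON object per line.
line = json.dumps(record)

# Parse a line and recompute the `expert_*_mean` value from raw scores.
parsed = json.loads(line)
expert_pq_mean = mean(parsed["expert_production_quality_raw_scores"])
print(expert_pq_mean)  # 7
```

If the renaming is consistent, the same pattern applies to the `non_expert_*` columns and to the other four evaluation dimensions.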