Update README.md
README.md
@@ -40,9 +40,9 @@ The model was fine-tuned on a proprietary dataset from OpenVoid, featuring high-
 - **HumanEval v1.0**: pass@1: 0.561
 - **EvalPlus v1.1**: pass@1: 0.500
 - **AGIEval**: 40.74
-- **GPT4All
-- **TruthfulQA
-- **Bigbench
+- **GPT4All**: 70.17
+- **TruthfulQA**: 51.15
+- **Bigbench**: 44.12
 - **Average**: 51.55
 
 ## How to Use the Model
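One note on the updated numbers: the reported **Average** of 51.55 appears to be the mean of the four percentage-scale benchmarks (AGIEval, GPT4All, TruthfulQA, Bigbench), with the two pass@1 metrics excluded. That grouping is an inference from the values, not stated in the README; a quick check:

```python
# Assumed interpretation: "Average" is the mean of the four non-pass@1
# benchmark scores listed in the README. Values are taken from the diff above.
scores = {
    "AGIEval": 40.74,
    "GPT4All": 70.17,
    "TruthfulQA": 51.15,
    "Bigbench": 44.12,
}

average = sum(scores.values()) / len(scores)
print(average)  # 51.545 (up to float precision), i.e. the reported 51.55 after rounding
```

Under this reading the arithmetic works out (206.18 / 4 = 51.545), which supports taking 51.55 as a rounded mean of the four scores.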