Datasets:
adding an additional reference and my current experiments
README.md
CHANGED
@@ -27,7 +27,7 @@ datasets:
 8. [Optimization Techniques](#8-optimization-techniques)
 9. [Lessons Learned](#9-lessons-learned-from-implementing-muon)
 10. [Conclusion](#10-conclusion)
-
+11. [Extended Work](#11-extended-work)
 
 ## 🧪 Try It Yourself
 
@@ -128,6 +128,8 @@ This hybrid approach gets the best of both worlds: Muon's efficient matrix updates
 ### Training Stages
 - ✅ Pre-training from scratch
 - ✅ Domain adaptation
+- 🔬 RL training for math reasoning tasks
+- 👉🏻 I'm currently running experiments to investigate the training dynamics, stay tuned for updates!
 - ❌ Fine-tuning (low-rank updates like LoRA are preferable)
 - ❌ Alignment stages (RLHF/DPO)
 
@@ -256,9 +258,12 @@ As you build and train your own models, consider Muon for hidden layer optimization
 
 
 ## 11. Extended Work
-For the distributed (DP × TP) implementation built for CPU/Gloo environments, see:
+- For the distributed (DP × TP) implementation built for CPU/Gloo environments, see:
 
-[🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Engineering Breakdown (CPU-Friendly, Tutorial Style)](https://huggingface.co/datasets/bird-of-paradise/muon-distributed)
+  [🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Engineering Breakdown (CPU-Friendly, Tutorial Style)](https://huggingface.co/datasets/bird-of-paradise/muon-distributed)
+
+- For those of you who are interested in validation work on Moonshot's "Muon is Scalable for LLM Training", 👉🏻 check out
+  [🔬 Distributed Muon: Field Notes & Reproducibility Artifacts](https://huggingface.co/datasets/bird-of-paradise/muon-distributed-reproducibility)
 
 ---
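The diff context above mentions the hybrid approach (Muon's matrix updates for 2-D hidden weights, a conventional optimizer such as AdamW elsewhere), whose core step is orthogonalizing each update via a Newton–Schulz iteration. Below is a minimal NumPy sketch of that idea. It is illustrative only: it uses the simple cubic Newton–Schulz polynomial for clarity (the public Muon reference implementation uses a tuned quintic), and the function and parameter names are my own, not from this repository.

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=15):
    """Approximately orthogonalize a 2-D update matrix.

    Cubic Newton-Schulz iteration, shown for clarity; the public Muon
    reference implementation uses a tuned quintic polynomial instead.
    """
    # Frobenius normalization bounds the spectral norm by 1,
    # which guarantees the iteration converges.
    x = g / (np.linalg.norm(g) + 1e-7)
    transposed = g.shape[0] > g.shape[1]
    if transposed:
        x = x.T  # iterate on the wide orientation so x @ x.T stays small
    for _ in range(steps):
        # Each step maps every singular value s to 1.5*s - 0.5*s**3,
        # pushing all singular values toward 1 (an orthogonal matrix).
        x = 1.5 * x - 0.5 * (x @ x.T) @ x
    return x.T if transposed else x

def split_param_groups(named_params):
    """Hybrid grouping: 2-D hidden weights -> Muon, everything else -> AdamW."""
    muon, adamw = [], []
    for name, p in named_params:
        (muon if p.ndim == 2 else adamw).append(name)
    return muon, adamw
```

In practice the orthogonalization is applied to the momentum buffer rather than the raw gradient, and embeddings, output heads, biases, and norm scales stay on AdamW — which matches the "hidden layer optimization" framing in the README.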