bird-of-paradise committed on
Commit 1954d1a · verified · 1 Parent(s): fe9be17

adding an additional reference and my current experiments

Files changed (1): README.md (+8, -3)
README.md CHANGED
```diff
@@ -27,7 +27,7 @@ datasets:
 8. [Optimization Techniques](#8-optimization-techniques)
 9. [Lessons Learned](#9-lessons-learned-from-implementing-muon)
 10. [Conclusion](#10-conclusion)
-10. [Extended Work](#11-extended-work)
+11. [Extended Work](#11-extended-work)
 
 ## 🧪 Try It Yourself
 
@@ -128,6 +128,8 @@ This hybrid approach gets the best of both worlds: Muon's efficient matrix updat
 ### Training Stages
 - ✅ Pre-training from scratch
 - ✅ Domain adaptation
+- 🔬 RL training for math reasoning tasks
+- 👉🏻 I'm currently running experiments to investigate the training dynamics; stay tuned for updates!
 - ❌ Fine-tuning (low-rank updates like LoRA are preferable)
 - ❌ Alignment stages (RLHF/DPO)
 
@@ -256,9 +258,12 @@ As you build and train your own models, consider Muon for hidden layer optimizat
 
 
 ## 11. Extended Work
-For the distributed (DP × TP) implementation built for CPU/Gloo environments, see:
+- For the distributed (DP × TP) implementation built for CPU/Gloo environments, see:
 
-[🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Engineering Breakdown (CPU-Friendly, Tutorial Style)](https://huggingface.co/datasets/bird-of-paradise/muon-distributed)
+[🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Engineering Breakdown (CPU-Friendly, Tutorial Style)](https://huggingface.co/datasets/bird-of-paradise/muon-distributed)
+
+- For those of you who are interested in validation work on Moonshot's "Muon is Scalable for LLM Training", 👉🏻 check out
+[🔬 Distributed Muon: Field Notes & Reproducibility Artifacts](https://huggingface.co/datasets/bird-of-paradise/muon-distributed-reproducibility)
 
 ---
 
```
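
The hunks above reference Muon's efficient matrix updates for hidden layers. As background, the heart of a Muon step is an approximate orthogonalization of the gradient via a quintic Newton–Schulz iteration; a minimal NumPy sketch is below (the coefficients follow Keller Jordan's public reference implementation, not code from this repository):

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=5):
    """Approximately orthogonalize a 2D gradient matrix with the quintic
    Newton-Schulz iteration used in public Muon implementations.
    Coefficients (a, b, c) are from Keller Jordan's reference code."""
    a, b, c = 3.4445, -4.7750, 2.0315
    x = g / (np.linalg.norm(g) + 1e-7)  # normalize so singular values are <= 1
    transpose = x.shape[0] > x.shape[1]  # iterate on the smaller Gram matrix
    if transpose:
        x = x.T
    for _ in range(steps):
        s = x @ x.T
        x = a * x + (b * s + c * s @ s) @ x  # pushes singular values toward 1
    return x.T if transpose else x
```

After a handful of iterations the singular values of the output cluster near 1, which is why Muon is applied only to 2D hidden-layer weight matrices while embeddings, norms, and the head stay on AdamW (the "hybrid approach" the diff refers to).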