Song Hwanjun
1 follower · 1 following
AI & ML interests
None yet
Recent Activity
- liked a dataset about 14 hours ago: DISLab/BRIDGE
- upvoted a paper 7 days ago: Reasoning over Video: Evaluating How MLLMs Extract, Integrate, and Reconstruct Spatiotemporal Evidence
- reacted to Kseniase's post 11 months ago:
16 new studies on inference-time scaling: Over the last couple of weeks, a large number of studies on inference-time scaling have emerged. And it's so cool, because each new paper adds a trick to the toolbox, making LLMs more capable without needing to scale the models' parameter count. So here are 13 new methods plus 3 comprehensive studies on test-time scaling:

1. https://huggingface.co/papers/2504.02495 — Probably the most popular study. It proposes boosting inference-time scalability by improving reward modeling. To enhance performance, DeepSeek-GRM uses adaptive critiques, parallel sampling, pointwise generative RM, and Self-Principled Critique Tuning (SPCT).
2. https://huggingface.co/papers/2504.04718 — Allows small models to use external tools, such as code interpreters and calculators, to enhance self-verification.
3. https://huggingface.co/papers/2504.00810 — Proposes training LLMs on code-based reasoning paths to make test-time scaling more efficient, limiting unnecessary tokens with a special dataset and a Shifted Thinking Window.
4. https://huggingface.co/papers/2504.00891 — Introduces GenPRM, a generative process reward model that uses CoT reasoning and code verification for step-by-step judgment. With only 23K training examples, GenPRM outperforms prior PRMs and larger models.
5. https://huggingface.co/papers/2503.24320 — The SWIFT test-time scaling framework improves World Models' performance without retraining, using strategies like fast tokenization, Top-K pruning, and efficient beam search.
6. https://huggingface.co/papers/2504.07104 — Proposes REBEL for scaling RAG systems, which uses multi-criteria optimization with CoT prompting for better performance-speed trade-offs as inference compute increases.
7. https://huggingface.co/papers/2503.13288 — Proposes a φ-Decoding strategy that uses foresight sampling, clustering, and adaptive pruning to estimate and select optimal reasoning steps.

Read further below. Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
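The common thread in several of these papers (parallel sampling scored by a reward model, as in DeepSeek-GRM or GenPRM) is best-of-N selection: spend extra compute at inference time by drawing several candidate answers and keeping the one a reward model scores highest. A minimal sketch of that idea, with hypothetical stand-in `generate` and `reward` functions rather than any specific paper's API:

```python
def best_of_n(prompt, generate, reward, n=8):
    """Best-of-N inference-time scaling: sample n candidate answers for
    the prompt and return the one the reward model scores highest.
    Larger n trades extra inference compute for better expected quality,
    with no change to the underlying model's parameters."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)

# Toy demo with deterministic stand-ins: the "generator" cycles through
# fixed guesses, and the "reward" prefers answers close to 42.
guesses = iter([10, 41, 90, 5])
generate = lambda prompt: next(guesses)
reward = lambda answer: -abs(answer - 42)
print(best_of_n("What is 6 x 7?", generate, reward, n=4))  # prints 41
```

Process reward models (PRMs) extend this by scoring each intermediate reasoning step rather than only the final answer, which is what enables the step-level pruning strategies several of the listed papers build on.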
Organizations
Papers (2)
- arxiv: 2403.03194
- arxiv: 2402.13249

Models (1)
- Hwanjun/my_awesome_billsum_model — Updated Jan 20, 2023

Datasets (0)
- None public yet