hac's Collections
RL
Quantization
Distillation
Attention

RL

updated 1 day ago

  • E-GRPO: High Entropy Steps Drive Effective Reinforcement Learning for Flow Models

    Paper • 2601.00423 • Published Jan 1 • 11

  • GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization

    Paper • 2601.05242 • Published Jan 8 • 231

  • FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning

    Paper • 2601.18150 • Published Jan 26 • 9

  • DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment

    Paper • 2601.20218 • Published Jan 28 • 16

  • Flow-GRPO: Training Flow Matching Models via Online RL

    Paper • 2505.05470 • Published May 8, 2025 • 88

  • Unified Personalized Reward Model for Vision Generation

    Paper • 2602.02380 • Published Feb 2 • 20

  • Alleviating Sparse Rewards by Modeling Step-Wise and Long-Term Sampling Effects in Flow-Based GRPO

    Paper • 2602.06422 • Published Feb 6 • 47

  • Flow-OPD: On-Policy Distillation for Flow Matching Models

    Paper • 2605.08063 • Published 8 days ago • 93