# Dataset Viewer

Auto-converted to Parquet.
## Schema

Columns, types, and min/max lengths (or value range, for `num_images`) as reported by the viewer:

| column | type | min | max |
|---|---|---|---|
| paper_id | string (length) | 5 | 5 |
| paper_title | string (length) | 8 | 128 |
| stem | string (length) | 20 | 126 |
| pdf | unknown | – | – |
| methodology_text | string (length) | 0 | 5k |
| num_images | int32 (value) | 0 | 178 |
| images | list (length) | 0 | 178 |
| captions | list (length) | 0 | 178 |
| bboxes | list (length) | 0 | 178 |
| page_indices | list (length) | 0 | 178 |
| content_list | large_string (length) | 67k | 1.35M |
| markdown | large_string (length) | 46.1k | 861k |
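To make the column layout concrete, here is a minimal sketch of one record under this schema, with a light consistency check. The field values and the `check_record` helper are hypothetical stand-ins; in practice rows would come from the Hub, e.g. via `datasets.load_dataset("Samarth0710/neurips2025-papers")` (assuming the default config loads).

```python
# Hypothetical record matching the column types in the schema above.
record = {
    "paper_id": "00013",                       # string, length exactly 5
    "paper_title": "Example Paper Title",      # string, length 8-128
    "stem": "00013_Example_Paper_Title_Stem",  # string, length 20-126
    "pdf": "JVBERi0xLjUK",                     # base64-encoded PDF (truncated here)
    "methodology_text": "We propose ...",      # string, up to ~5k chars
    "num_images": 2,                           # int32
    "images": [{"src": "...", "height": 100, "width": 200}] * 2,
    "captions": ["Figure 1: ...", "Figure 2: ..."],
    "bboxes": ["[258, 90, 741, 222]", "[178, 92, 821, 215]"],
    "page_indices": [1, 2],
}

def check_record(r: dict) -> None:
    """Checks implied by the schema: the per-image lists all align with num_images."""
    assert len(r["paper_id"]) == 5
    assert 8 <= len(r["paper_title"]) <= 128
    n = r["num_images"]
    assert len(r["images"]) == len(r["captions"]) == len(r["bboxes"]) == n
    assert len(r["page_indices"]) == n

check_record(record)
```

The alignment of `images`, `captions`, `bboxes`, and `page_indices` with `num_images` is consistent with the sample rows shown in the preview, where each row's `page_indices` list has exactly `num_images` entries.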
## Sample rows

### 00013: Differentiable Sparsity via $D$-Gating: Simple and Versatile Structured Penalization

- stem: `00013_Differentiable_Sparsity_via_D-Gating_Simple_and_Versatile_Structured_Penalization`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Inspired by prior work on differentiable sparse regularization, we propose a new approach called $D…"
- num_images: 22
- page_indices: [1, 2, 5, 6, 6, 7, 8, 8, 24, 24, 25, 25, 26, 26, 26, 27, 27, 27, 28, 29, 29, 30]
- first caption (excerpt): "Figure 1: Parameter trajectories for a two-feature squared loss toy objective with non-convex $\b…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
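The `pdf` cells all begin with the base64 string `JVBERi0xLjUK`, which decodes to the standard PDF magic header. A minimal sketch of checking and recovering the file (only the visible prefix is used here, since the preview truncates the full value; `full_pdf_string` is a hypothetical placeholder for an untruncated cell):

```python
import base64

# The visible prefix of the `pdf` column, as shown in the preview.
prefix = "JVBERi0xLjUK"

# Decoding it yields the PDF magic header — a quick sanity check that the
# column really holds a base64-encoded PDF file.
decoded = base64.b64decode(prefix)
print(decoded)  # b'%PDF-1.5\n'

# With a full (untruncated) cell value, the file could be recovered like this:
# with open("00013.pdf", "wb") as f:
#     f.write(base64.b64decode(full_pdf_string))
assert decoded.startswith(b"%PDF")
```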
### 00028: ReSim: Reliable World Simulation for Autonomous Driving

- stem: `00028_ReSim_Reliable_World_Simulation_for_Autonomous_Driving`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Basics. ReSim is built on CogVideoX [30], a high-capacity diffusion transformer originally conditio…"
- num_images: 18
- page_indices: [1, 4, 4, 5, 6, 6, 7, 8, 8, 8, 25, 27, 29, 29, 30, 30, 31, 31]
- first caption (excerpt): "Figure 1: Overview of ReSim. (a) Heterogeneous driving data includes (i,ii) experts’ safe drivin…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00066: Do-PFN: In-Context Learning for Causal Effect Estimation

- stem: `00066_Do-PFN_In-Context_Learning_for_Causal_Effect_Estimation`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "1. Do-PFN: We propose Do-PFN, a foundation model pre-trained on data from structural causal models …"
- num_images: 22
- page_indices: [1, 6, 7, 8, 8, 9, 23, 23, 24, 25, 25, 26, 27, 27, 27, 28, 28, 29, 30, 30, 30, 30]
- first caption (excerpt): "Figure 1: Do-PFN overview: Do-PFN performs in-context learning (ICL) for causal effect estimation,…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00093: $\Psi$-Sampler: Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score …

- stem: `00093_Psi-Sampler_Initial_Particle_Sampling_for_SMC-Based_Inference-Time_Reward_Alignment_in_Score_…`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Taehoon Yoon∗ Yunhong Min∗ Kyeongmin Yeo∗ Minhyuk Sung KAIST {taehoon,dbsghd363,aaaaa,mhsung}…"
- num_images: 16
- page_indices: [6, 8, 9, 28, 31, 33, 33, 33, 33, 33, 33, 33, 33, 34, 35, 36]
- first caption (excerpt): "Figure 1: Toy sampling–method comparison. Each panel visualizes both the initial samples (blue) …"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00106: What Makes a Reward Model a Good Teacher? An Optimization Perspective

- stem: `00106_What_Makes_a_Reward_Model_a_Good_Teacher_An_Optimization_Perspective`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Noam Razin, Zixuan Wang, Hubert Strauss, Stanley Wei, Jason D. Lee, Sanjeev Arora / Princeton Langu…"
- num_images: 14
- page_indices: [1, 7, 8, 53, 53, 53, 54, 54, 55, 56, 56, 57, 57, 57]
- first caption (excerpt): "Figure 1: Illustration of how accuracy (Definition 1) and reward variance (Definition 2) affect th…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00189: Q-Insight: Understanding Image Quality via Visual Reinforcement Learning

- stem: `00189_Q-Insight_Understanding_Image_Quality_via_Visual_Reinforcement_Learning`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Group Relative Policy Optimization (GRPO) is an innovative reinforcement learning paradigm that has…"
- num_images: 23
- page_indices: [1, 3, 6, 6, 7, 8, 8, 17, 17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18]
- first caption (excerpt): "Figure 1: PLCC comparisons between our proposed Q-Insight and existing IQA metrics (left) and thre…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00232: GraphMaster: Automated Graph Synthesis via LLM Agents in Data-Limited Environments

- stem: `00232_GraphMaster_Automated_Graph_Synthesis_via_LLM_Agents_in_Data-Limited_Environments`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Traditional graph data synthesis methods [7] address data scarcity through various approaches. Edge…"
- num_images: 10
- page_indices: [3, 7, 8, 28, 29, 29, 30, 30, 31, 43]
- first caption: "Figure 1: GraphMaster: A hierarchical multi-agent framework for text-attributed graph synthesis."
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00251: BIOCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive Learning

- stem: `00251_BioCLIP_2_Emergent_Properties_from_Scaling_Hierarchical_Contrastive_Learning`
- pdf: base64-encoded PDF (preview truncated)
- num_images: 15
- page_indices: [0, 2, 5, 6, 7, 8, 24, 25, 25, 26, 29, 31, 31, 32, 32]
- first caption (excerpt): "Figure 1: While BIOCLIP 2 is trained to distinguish species, it demonstrates emergent properties b…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00274: Approximate Domain Unlearning for Vision-Language Models

- stem: `00274_Approximate_Domain_Unlearning_for_Vision-Language_Models`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Kodai Kawamura∗1,2, Yuta Goto∗1, Rintaro Yanagi3, Hirokatsu Kataoka3,4, Go Irie1 / 1Tokyo Unive…"
- num_images: 9
- page_indices: [1, 2, 7, 7, 8, 9, 26, 27, 28]
- first caption (excerpt): "Figure 1: Illustration of Approximate Domain Unlearning (ADU). ADU is a novel approximate unlearni…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
### 00279: VoxDet: Rethinking 3D Semantic Scene Completion as Dense Object Detection

- stem: `00279_VoxDet_Rethinking_3D_Semantic_Scene_Completion_as_Dense_Object_Detection`
- pdf: base64-encoded PDF (preview truncated)
- methodology_text (excerpt): "Overview. Fig. 3 shows the overall workflow of our VoxDet. Given RGB input, we follow previous work…"
- num_images: 17
- page_indices: [1, 3, 4, 5, 7, 8, 9, 23, 25, 26, 26, 26, 26, 28, 30, 33, 34]
- first caption (excerpt): "Figure 1: Schematic comparison of previous SSC paradigm [6, 2, 79] and the proposed VoxDet. Left:…"
- images, bboxes, content_list, markdown: previews truncated by the viewer
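Each entry of `page_indices` gives the page from which the corresponding extracted image came, so the list can be used to locate figures within a paper's PDF. A small sketch using the VoxDet row's indices, copied from the preview (the `Counter`-based counting is illustrative, not part of the dataset):

```python
from collections import Counter

# Page index of each extracted image for row 00279 (VoxDet), as listed above.
page_indices = [1, 3, 4, 5, 7, 8, 9, 23, 25, 26, 26, 26, 26, 28, 30, 33, 34]

# num_images for this row is 17, matching the list length.
assert len(page_indices) == 17

# Count how many extracted images fall on each page.
per_page = Counter(page_indices)
print(per_page[26])  # 4 images were extracted from page 26
```

Because `images`, `captions`, and `bboxes` are aligned with `page_indices`, zipping them together recovers, for each figure, its crop location and caption on the stated page.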
End of preview.

README.md exists but its content is empty.

Downloads last month: 10