LLMs
Self-Boosting Large Language Models with Synthetic Preference Data (arXiv:2410.06961)
Qwen2.5 Technical Report (arXiv:2412.15115)
SCOPE: Optimizing Key-Value Cache Compression in Long-context Generation (arXiv:2412.13649)
NeoBERT: A Next-Generation BERT (arXiv:2502.19587)
Think Inside the JSON: Reinforcement Strategy for Strict LLM Schema Adherence (arXiv:2502.14905)
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? (arXiv:2502.14502)
From RAG to Memory: Non-Parametric Continual Learning for Large Language Models (arXiv:2502.14802)
LLM Pretraining with Continuous Concepts (arXiv:2502.08524)
MMTEB: Massive Multilingual Text Embedding Benchmark (arXiv:2502.13595)
Large-Scale Data Selection for Instruction Tuning (arXiv:2503.01807)
Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective (arXiv:2503.01933)