prithivMLmods committed · Commit 5f16436 · verified · 1 Parent(s): e101836

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -11,7 +11,7 @@ tags:
 - llama.cpp
 ---
 
-# **TimeLens-8B**
+# **TimeLens-8B-GGUF**
 
 > TimeLens-8B from TencentARC is an 8B-parameter multimodal vision-language model fine-tuned from Qwen3-VL-8B-Instruct using a novel RLVR (reinforcement learning with verifiable rewards) recipe on the high-quality TimeLens-100K VTG dataset, achieving state-of-the-art video temporal grounding performance among open-source models with 72.0% R1@0.3 (Charades-TimeLens), 64.5% R1@0.3 (ActivityNet-TimeLens), and 75.6% R1@0.3 (QVHighlights-TimeLens), significantly outperforming baselines like Qwen3-VL-8B-Instruct and Qwen2.5-VL-7B. Designed for precise localization of visual events described by natural language queries, it outputs timestamped segments in the format "The event happens in <start time> - <end time> seconds" using low FPS=2 sampling (min_pixels=64*28*28, total_pixels=14336*28*28) for efficient video processing via Transformers with Flash-Attention-2 support. Released with code, project page, and TimeLens-Bench evaluation suite, it excels on Charades-TimeLens, ActivityNet-TimeLens, and QVHighlights-TimeLens leaderboards for research in video understanding, temporal reasoning, and event detection.
```
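The README quotes the model's fixed answer template, "The event happens in &lt;start time&gt; - &lt;end time&gt; seconds". A minimal sketch of pulling the segment back out of a generated answer (the regex and the `parse_segment` helper are assumptions for illustration, not code from the TimeLens repo):

```python
import re

# Assumed helper: extract the (start, end) segment from a TimeLens-style
# answer of the form "The event happens in <start> - <end> seconds".
# The number pattern (integers or decimals) is an assumption about the
# model's output, not an official specification.
SEGMENT_RE = re.compile(
    r"The event happens in\s*"
    r"([0-9]+(?:\.[0-9]+)?)\s*-\s*([0-9]+(?:\.[0-9]+)?)\s*seconds"
)

def parse_segment(answer: str):
    """Return (start, end) in seconds, or None if no segment is found."""
    m = SEGMENT_RE.search(answer)
    if m is None:
        return None
    start, end = float(m.group(1)), float(m.group(2))
    # Guard against a model emitting the bounds in reverse order.
    return (start, end) if start <= end else (end, start)

print(parse_segment("The event happens in 12.4 - 17.9 seconds."))
```

A parser like this is what an evaluation harness would run on each generation before scoring it against ground-truth segments.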
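The R1@0.3 numbers above are the standard recall@1 metric at a temporal IoU threshold of 0.3: a prediction counts as correct when its IoU with the ground-truth segment is at least 0.3. The textbook temporal-IoU computation (not code from TimeLens-Bench) looks like this:

```python
# Temporal IoU between two (start, end) segments in seconds:
# intersection length divided by union length. This is the standard
# definition used by R1@IoU-threshold metrics in video temporal grounding.
def temporal_iou(pred, gt):
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A prediction of 10-20 s against ground truth 15-25 s overlaps by 5 s
# over a 15 s union, giving IoU = 1/3, which clears the 0.3 threshold.
print(temporal_iou((10.0, 20.0), (15.0, 25.0)))
```

R1@0.5 and R1@0.7 variants, also common on these leaderboards, just raise the threshold in the final comparison.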
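The sampling settings quoted in the card (FPS=2, min_pixels=64\*28\*28, total_pixels=14336\*28\*28) follow the Qwen-VL convention of expressing pixel budgets in multiples of the 28×28 vision patch. A small illustrative calculation of what that budget implies per frame (the helper names are assumptions; the real processor additionally rounds frame sizes to patch multiples):

```python
# Illustrative arithmetic only, not TimeLens or Qwen-VL processor code.
MIN_PIXELS = 64 * 28 * 28        # 50,176: lower bound per frame
TOTAL_PIXELS = 14336 * 28 * 28   # 11,239,424: budget across all sampled frames
FPS = 2                          # frames sampled per second of video

def sampled_frames(video_seconds: float) -> int:
    """Number of frames taken from a clip at FPS=2 sampling."""
    return max(1, int(video_seconds * FPS))

def avg_frame_budget(video_seconds: float) -> int:
    """Average pixels available per frame under the total budget."""
    return TOTAL_PIXELS // sampled_frames(video_seconds)

# A 60 s clip yields 120 frames, leaving roughly 93k pixels per frame,
# comfortably above the 50,176-pixel floor.
print(sampled_frames(60.0), avg_frame_budget(60.0))
```

This is why the card describes the settings as "low FPS" sampling for efficiency: the fixed total pixel budget is spread across however many frames the clip length produces.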