🚦 OmniTraffic: A Large-scale Multi-view Spatiotemporal Dataset for Traffic Understanding
📊 Dataset Summary
Welcome to the OmniTraffic Dataset repository. This repository hosts the complete OmniTraffic Dataset: the underlying pool of over 8 million generated VQA samples and roughly 280 GB of multimodal data. It is designed for large-scale pre-training, fine-tuning, and probing the scaling behavior of multimodal large language models (MLLMs) and autonomous driving systems.
⚠️ Looking for Evaluation? If you are looking for our highly curated, human-validated subset (3,200 instances) for strict evaluation of MLLMs' traffic understanding and reasoning capabilities, please visit our companion OmniTraffic Benchmark repository.
Figure 1: Overview of the OmniTraffic Benchmark, illustrating the multi-view perception, diverse spatiotemporal annotations, and the hierarchical evaluation framework.
🌟 Key Features
- Spatiotemporal Reasoning: Requires models to track object trajectories, infer future states, and reason over multi-frame contexts.
- Multi-View & BEV Integration: Covers comprehensive perception angles (Front, Rear, Sides, BEV) and challenges models with cross-view matching tasks.
- Hierarchical Evaluation: Features a unique three-level evaluation framework testing foundation perception, spatiotemporal prediction, and strategic planning.
🖼️ Visual Showcase: Simulation vs. Real-World
To demonstrate the high fidelity of our generated scenarios, below are comparisons between the OmniTraffic simulation environments and real-world traffic scenes. Our pipeline aims to minimize the sim-to-real gap by rendering realistic lighting, diverse weather conditions, and complex multi-agent behaviors.
Top: OmniTraffic simulations of 12 road junctions modeled on the real world. Bottom: real-world driving scenes captured by road cameras.
📈 Three-Level Evaluation Framework
To systematically evaluate model capabilities, OmniTraffic introduces a progressive three-level framework:
- Level 1: Foundation Perception Focuses on object detection, state recognition, counting, and basic spatial relationships across different camera views.
- Level 2: Spatiotemporal Prediction Requires models to understand temporal dynamics, match BEV with multi-view images, predict trajectories, and identify hazards.
- Level 3: Strategic Planning & Reasoning The most advanced level, asking the model to act as the ego-vehicle driver or traffic controller, making complex decisions based on global traffic flow and multi-agent interactions.
📂 Scenario Directory Structure
To ensure consistency and ease of use, all unzipped scenario chunks share a unified hierarchical structure. The dataset primarily features two junction topologies: T-junctions (3 incoming views) and Cross-intersections (4 incoming views).
Below is the exact file structure. Items marked with [ ] are conditionally present depending on the junction type.
<scenario_name>/                      # e.g., urban_intersection_001
├── <timestep_id>/                    # e.g., 0, 100, 200 (representing independent timesteps)
│   ├── 3d_vehs.json                  # 3D vehicle metadata (position, heading, 3D model type)
│   ├── step_info.json                # Temporal metadata and traffic signal control state
│   ├── state_vector.npy              # Encoded traffic state vector (NumPy array)
│   ├── annotations/                  # Structured traffic scenario annotations
│   │   ├── 0.json                    # Annotation for view 0
│   │   ├── 1.json                    # Annotation for view 1
│   │   ├── 2.json                    # Annotation for view 2
│   │   └── [3.json]                  # Annotation for view 3 (cross-intersections only)
│   ├── QA/                           # Visual Question Answering (VQA) pairs
│   │   ├── 0.json                    # VQA pairs for view 0
│   │   ├── 1.json                    # VQA pairs for view 1
│   │   ├── 2.json                    # VQA pairs for view 2
│   │   └── [3.json]                  # VQA pairs for view 3 (cross-intersections only)
│   ├── high_quality_rgb/             # High-resolution rendered RGB images
│   │   ├── 0.png                     # RGB image from camera view 0
│   │   ├── 1.png                     # RGB image from camera view 1
│   │   ├── 2.png                     # RGB image from camera view 2
│   │   ├── [3.png]                   # RGB image from camera view 3 (cross-intersections only)
│   │   └── bev.png                   # Bird's-Eye View (BEV) map
│   └── low_quality_rgb/              # Auxiliary/down-sampled RGB images
│       ├── 0.png                     # RGB image from camera view 0
│       ├── 1.png                     # RGB image from camera view 1
│       ├── 2.png                     # RGB image from camera view 2
│       ├── [3.png]                   # RGB image from camera view 3 (cross-intersections only)
│       └── bev.png                   # Bird's-Eye View (BEV) map
├── <timestep_id_2>/
│   └── ... (same structure as above)
└── README.md                         # Scenario-specific meta-description
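Given this layout, a single timestep can be loaded with a few lines of Python. The sketch below assumes only the directory structure shown above; `load_timestep` is an illustrative helper, not a function shipped with the repository.

```python
import json
from pathlib import Path

import numpy as np

def load_timestep(scenario_dir: str, timestep: str) -> dict:
    """Load the core files of one timestep from an unzipped scenario chunk."""
    step = Path(scenario_dir) / timestep
    data = {
        "vehicles": json.loads((step / "3d_vehs.json").read_text()),
        "step_info": json.loads((step / "step_info.json").read_text()),
        "state_vector": np.load(step / "state_vector.npy"),
        "annotations": {},
        "qa": {},
    }
    # Views 0-2 always exist; view 3 is present only for cross-intersections,
    # so we glob instead of hard-coding view indices.
    for view_file in sorted((step / "annotations").glob("*.json")):
        data["annotations"][view_file.stem] = json.loads(view_file.read_text())
    for qa_file in sorted((step / "QA").glob("*.json")):
        data["qa"][qa_file.stem] = json.loads(qa_file.read_text())
    return data
```

Because view 3 is discovered rather than assumed, the same helper works for both T-junctions (3 views) and cross-intersections (4 views).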
📋 Dataset Instances
Example 1: Single Image Perception (Counting & Infrastructure) Corresponds to Level 1. A foundational task evaluating the model's ability to accurately detect and count specific road elements.
{
"question": "How many right-turn lanes are there in the incoming direction?",
"answer": "There is 1 right-turn lanes.",
"options": {
"A": "0",
"B": "1",
"C": "2",
"D": "3"
},
"correct_answer": "B",
"category": "Road Infrastructure",
"task": "Single Image",
"subtask": "Counting",
"capabilities": [
"Lane Detection",
"Spatial Understanding"
],
"image_path": "images/171/3.png",
"direction": 3,
"timestep": "171"
}
Example 2: Cross-View Reasoning (BEV to Multi-camera Matching) Corresponds to Level 2. An advanced task evaluating spatial reasoning by asking the model to match a specific driving direction on a BEV map to its corresponding multi-view camera image.
{
"question": "Given a BEV (bird's-eye view), the task is to determine the driving direction associated with the star-shaped marker. Please select the correct description from the options provided.",
"options": {
"A": "View from direction 0",
"B": "View from direction 1",
"C": "View from direction 2"
},
"correct_answer": "A",
"answer_text": "View from direction 0",
"bev_image": "images/430/bev_star.png",
"option_images": [
"images/430/0.png",
"images/430/1.png",
"images/430/2.png"
],
"images": [
"images/430/bev.png",
"images/430/0.png",
"images/430/1.png",
"images/430/2.png"
],
"target_timestep": "430",
"question_type": "bev_to_view",
"category": "Scene Understanding",
"task": "Cross-Timestep Multi Image",
"subtask": "BEV to View Matching",
"capabilities": [
"Spatial Reasoning",
"Cross-Timestep Analysis",
"BEV Understanding",
"View Matching"
]
}
Example 3: Strategic Decision Making (Multi-View Traffic Signal Control) Corresponds to Level 3. The most complex task, evaluating the model's capacity for strategic planning and multi-directional reasoning to control traffic signals based on global traffic flow.
{
"question": "Based on the current traffic conditions shown in all direction images, what is the optimal traffic signal phase decision? The phase information is as follows:\nPhase 0: All lanes of Image 2;\nPhase 1: All lanes of Image 1;\nPhase 2: All lanes of Image 0;\nPhase 3: All lanes of Image 3;\n\nPlease provide the phase number.",
"answer": "The optimal decision is to switch to Phase 3, which corresponds to image index 3 (granting green light to this direction).",
"options": {
"A": "Phase 0",
"B": "Phase 1",
"C": "Phase 3",
"D": "Phase 2"
},
"correct_answer": "C",
"category": "Comprehensive Analysis",
"task": "Multi Image",
"subtask": "Decision Making",
"capabilities": [
"Object Detection",
"Vehicle Classification",
"Traffic Flow Analysis",
"Priority Assessment",
"Multi-Directional Reasoning",
"Traffic Signal Control"
],
"images": [
"images/425/0.png",
"images/425/1.png",
"images/425/2.png",
"images/425/3.png"
],
"timestep": "425"
}
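Since every instance above carries an `options` dict and a `correct_answer` letter, scoring a model against these records reduces to letter matching. The sketch below is one possible scoring loop, not the official evaluation harness; the regex-based `extract_choice` heuristic is our own assumption about how to parse free-form model output.

```python
import re

LETTER_RE = re.compile(r"\b([A-D])\b")

def extract_choice(model_output: str) -> "str | None":
    """Pull the first standalone option letter (A-D) from a model response."""
    match = LETTER_RE.search(model_output.strip())
    return match.group(1) if match else None

def score_predictions(records: list, predictions: list) -> float:
    """Fraction of records where the extracted letter matches correct_answer."""
    correct = 0
    for record, output in zip(records, predictions):
        if extract_choice(output) == record["correct_answer"]:
            correct += 1
    return correct / len(records) if records else 0.0
```

Accuracy per `category`, `task`, or `capabilities` entry can then be obtained by grouping records on those fields before calling `score_predictions`.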
Data Fields
Note: Depending on the specific task (e.g., Single Image vs. Multi-Image), some fields may be dynamically present or absent.
- question (string): The main VQA prompt describing the task logic.
- options (dict): A dictionary mapping choice letters (e.g., "A", "B", "C", "D") to their respective text descriptions.
- correct_answer (string): The key corresponding to the correct option.
- answer / answer_text (string): The detailed textual explanation or content of the correct answer.
- category (string): The overarching domain of the question.
- task (string): The input format or task scope.
- subtask (string): The specific analytical goal.
- capabilities (list of string): The core perception and reasoning skills required to solve the problem.
- image_path (string): (Single Image tasks) Relative path to the visual input associated with the query.
- bev_image (string): (BEV tasks) Relative path to the Bird's-Eye View image used as the visual prompt.
- option_images / images (list of string): (Multi-Image tasks) Lists of image paths corresponding to the context or multiple-choice options.
- direction (int): An identifier mapping to the specific camera view or sensor angle.
- timestep / target_timestep (string): The specific temporal frame or time index targeted by the query.
- question_type (string): Identifier for the structural format of the question.
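Because the image-related fields vary by task type, downstream loaders need to handle all of them uniformly. One way to do that, sketched below under no assumptions beyond the field list above (`resolve_images` itself is a hypothetical helper):

```python
def resolve_images(record: dict) -> list:
    """Collect every image path a record references, whichever fields it carries."""
    paths = []
    if "image_path" in record:               # Single Image tasks
        paths.append(record["image_path"])
    if "bev_image" in record:                # BEV tasks
        paths.append(record["bev_image"])
    for key in ("images", "option_images"):  # Multi-Image tasks
        paths.extend(record.get(key, []))
    # Deduplicate while preserving order, since option_images often
    # overlaps with images (as in Example 2 above).
    seen = set()
    return [p for p in paths if not (p in seen or seen.add(p))]
```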
🛠️ Data Construction & Collection
The OmniTraffic benchmark was constructed through a rigorous two-stage process:
- Stage 1 (Massive Generation): 8 million QA pairs were generated via an automated pipeline covering diverse simulated urban, suburban, and highway scenarios.
- Stage 2 (Expert Validation): A representative subset of 3,200 instances was sampled across all task levels and capabilities. These instances underwent strict human-in-the-loop verification to correct ambiguous questions, rectify spatial errors, and ensure accurate ground-truth matches.
- Privacy & Anonymization: All data has been processed to remove personally identifiable information (PII).
🧰 Provided Tools & Scripts (VQA Generation)
To facilitate further research, dataset expansion, and custom evaluation setups, this repository includes an automated VQA generation script. Users can utilize this provided script to dynamically generate their own customized QA pairs directly from the raw image and JSON annotation files. Check the repository files for the script and usage instructions.
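The bundled script's actual interface lives in the repository files, but the core idea of template-based QA generation can be illustrated in miniature. The sketch below assumes a hypothetical annotation schema in which each per-view JSON file holds an `"objects"` list whose entries carry a `"type"` field; both the schema and `make_counting_question` are our own illustrative assumptions, not the repository's API.

```python
import json
from pathlib import Path

def make_counting_question(annotation_path: str, object_type: str = "car") -> dict:
    """Build one multiple-choice counting question from a per-view annotation file."""
    objects = json.loads(Path(annotation_path).read_text()).get("objects", [])
    count = sum(1 for obj in objects if obj.get("type") == object_type)
    # Offer the true count plus nearby distractors as the answer options.
    candidates = sorted({count, max(0, count - 1), count + 1, count + 2})
    options = {letter: str(value) for letter, value in zip("ABCD", candidates)}
    correct = next(k for k, v in options.items() if v == str(count))
    return {
        "question": f"How many {object_type}s are visible in this view?",
        "options": options,
        "correct_answer": correct,
        "category": "Road Infrastructure",
        "task": "Single Image",
        "subtask": "Counting",
    }
```

Scaling this pattern across views, timesteps, and question templates is what makes generating millions of QA pairs tractable.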
⚠️ Limitations & Bias
While the OmniTraffic Benchmark provides a robust evaluation suite, the current version primarily covers standardized simulated traffic scenarios. Extreme weather conditions (e.g., severe blizzards), highly unstructured rural roads, and rare long-tail edge cases may be underrepresented. Users should exercise caution when extrapolating benchmark performance directly to real-world, physical autonomous driving deployment without further rigorous domain adaptation.
🔧 Maintenance Plan
- Updates & Errata: We will actively monitor community feedback via Hugging Face Discussions and the GitHub issue tracker. Corrections to annotations or corrupted files will be released as new dataset versions (e.g., v1.1).
- Hosting: The benchmark will be permanently hosted on the Hugging Face Hub.