MITRE+NVD+ExploitDB Dataset (Alpaca/ChatML/Harmony)
A dataset for training AI assistants/agents on vulnerability analysis and pentesting Q&A. It is built by the pentestds pipeline, which fetches and merges data from MITRE CVE, NVD (CVSS enrichment), ExploitDB, and a small set of HuggingFace datasets. Provenance is recorded for every entry, and the pipeline emits Alpaca, ChatML, and Harmony JSONL files.
Dataset Summary
This dataset is designed for training AI agents to understand and perform penetration testing tasks. It is built by an automated Python pipeline that:
- Downloads CVE data from MITRE
- Streams CVEs from NVD and extracts CVSS where available
- Indexes ExploitDB and links exploits to CVEs
- Loads and merges additional datasets from HuggingFace (e.g., MITRE ATT&CK reasoning, TTP mapping, SecureCode v2)
- Validates records using Pydantic schemas for Alpaca and ChatML formats
- Tracks provenance for every record
- Outputs Alpaca, ChatML, and Harmony JSONL files
- Uploads the dataset to HuggingFace when `TOKEN` is configured
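The "links exploits to CVEs" step can be sketched as a simple index keyed by CVE ID. This is a minimal illustration only; the record layouts and field names below are assumptions, not the pipeline's actual schema:

```python
# Minimal sketch of linking ExploitDB entries to CVE records by CVE ID.
# The record layouts here are illustrative, not the pipeline's real schema.

def link_exploits(cves, exploits):
    """Attach matching ExploitDB references to each CVE record."""
    by_cve = {}
    for exp in exploits:
        for cve_id in exp.get("cve_ids", []):
            by_cve.setdefault(cve_id, []).append(exp["edb_id"])
    return [
        {**cve, "exploit_refs": by_cve.get(cve["cve_id"], [])}
        for cve in cves
    ]

cves = [{"cve_id": "CVE-2023-1234", "summary": "Example flaw"}]
exploits = [{"edb_id": "EDB-50001", "cve_ids": ["CVE-2023-1234"]}]
merged = link_exploits(cves, exploits)
```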
Included data types:
- CVE Data: Real vulnerability information from MITRE
- CVSS: NVD CVSS metrics (when present)
- Exploit Code: Proof-of-concept exploit references/snippets from ExploitDB
- Secure Coding: Multi-turn secure coding dialogues and vulnerability remediations
- Red Team Techniques: MITRE ATT&CK-aligned reasoning data
- Security Mappings: TTP mapping scenarios
Supported Tasks
- Vulnerability Analysis: Understanding and explaining CVEs
- Exploit Development: Writing and understanding exploit code
- Pentesting Methodology: Planning and executing penetration tests
- Red Team Operations: Advanced persistent threat simulation
- Tool Usage: Understanding cybersecurity tools and commands
Dataset Structure & Pipeline
The dataset is available in multiple formats (Alpaca, ChatML, Harmony), all generated by the same pipeline. The build process can be triggered by running `pentestds build`.
Naming conventions
Repository name
By default, the builder uploads to:
jason-oneal/mitre-stix-cve-exploitdb-dataset-alpaca-chatml-harmony
This name comes from the builder’s `USERNAME` setting plus a fixed suffix.
File names
Each format is written as JSONL and split into train/validate:
- Alpaca: `alpaca_train.jsonl`, `alpaca_validate.jsonl`
- ChatML: `chatml_train.jsonl`, `chatml_validate.jsonl`
- Harmony: `harmony_train.jsonl`, `harmony_validate.jsonl` (each line is `{"text": "...raw Harmony tokens..."}`)
Metadata files:
- Dataset card: `README.md` (uploaded from the builder repo’s `card.md`)
- Citation: `CITATION.cff`
- Provenance: `provenance.json`
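Since each Harmony line is a JSON object with a single `text` field, the files can be consumed with nothing but the standard library. The token text below is made up for illustration:

```python
import json

# Each Harmony JSONL line is a JSON object with one "text" field
# holding the raw Harmony token stream for that example.
# This sample line is illustrative, not taken from the dataset.
sample_line = '{"text": "<|start|>user<|message|>Explain CVE-2023-1234<|end|>"}'

record = json.loads(sample_line)
text = record["text"]
```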
Alpaca Format
```json
{
  "instruction": "Explain CVE-2023-1234",
  "input": "",
  "output": "CVE-2023-1234 is a critical vulnerability in Example Software..."
}
```
ChatML Format
```json
{
  "messages": [
    {"role": "user", "content": "Explain CVE-2023-1234"},
    {"role": "assistant", "content": "CVE-2023-1234 is a critical vulnerability in Example Software..."}
  ]
}
```
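The two formats carry the same content, so an Alpaca record can be converted into ChatML messages mechanically. This is a sketch; how the real pipeline folds a non-empty `input` field into the conversation is an assumption here:

```python
def alpaca_to_chatml(rec):
    """Turn an Alpaca-style record into a ChatML-style messages dict."""
    user = rec["instruction"]
    # Assumption: append a non-empty "input" to the user turn.
    if rec.get("input"):
        user = f"{user}\n\n{rec['input']}"
    return {
        "messages": [
            {"role": "user", "content": user},
            {"role": "assistant", "content": rec["output"]},
        ]
    }

example = {
    "instruction": "Explain CVE-2023-1234",
    "input": "",
    "output": "CVE-2023-1234 is a critical vulnerability in Example Software...",
}
chatml = alpaca_to_chatml(example)
```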
Data Sources
| Source | Type | License | Records | URL |
|---|---|---|---|---|
| MITRE CVE Database | CVE | MITRE CVE License | varies | https://cve.mitre.org/ |
| National Vulnerability Database | CVE | NIST License | varies | https://nvd.nist.gov/ |
| Exploit Database | EXPLOIT | ExploitDB License | varies | https://www.exploit-db.com/ |
| MITRE ATT&CK Reasoning | REDTEAM | Apache-2.0 | ~300 | https://huggingface.co/datasets/cobo512/Mitre-ATTACK-reasoning-dataset |
| Security TTP Mapping | SCENARIO | Apache-2.0 | ~500 | https://huggingface.co/datasets/tumeteor/Security-TTP-Mapping |
| SecureCode v2 | SCENARIO | Apache-2.0 | ~430 | https://huggingface.co/datasets/scthornton/securecode-v2 |
Data Processing
Content Validation
All data undergoes content validation to ensure quality and consistency.
Content Cleaning
Content is cleaned and validated to ensure proper formatting and length.
Validation
All records are validated against Pydantic schemas to ensure data quality and format consistency.
Train/Validation Split
The dataset is split using deterministic hash-based partitioning with optional stratification by source or license type.
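A deterministic hash-based split assigns each record to train or validate from a stable hash of its content, so repeated builds reproduce the same partition. This is a sketch; the key the pipeline actually hashes and the split ratio are assumptions:

```python
import hashlib

def assign_split(record_key: str, validate_pct: int = 10) -> str:
    """Deterministically bucket a record by hashing a stable key.

    The key and 90/10 ratio are illustrative assumptions.
    """
    digest = hashlib.sha256(record_key.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "validate" if bucket < validate_pct else "train"

# The same key lands in the same split on every run.
split_a = assign_split("CVE-2023-1234")
split_b = assign_split("CVE-2023-1234")
```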
Usage
Loading the Dataset
```python
from datasets import load_dataset

repo_id = "jason-oneal/mitre-stix-cve-exploitdb-dataset-alpaca-chatml-harmony"

# Load specific format files directly
alpaca_train = load_dataset(repo_id, data_files={"train": "alpaca_train.jsonl"})["train"]
chatml_train = load_dataset(repo_id, data_files={"train": "chatml_train.jsonl"})["train"]
harmony_train = load_dataset(repo_id, data_files={"train": "harmony_train.jsonl"})["train"]
```
Example Usage
```python
# Get a sample record from the Alpaca format
alpaca_sample = alpaca_train[0]
print(f"Instruction: {alpaca_sample['instruction']}")
print(f"Output: {alpaca_sample['output']}")

# Get a sample record from the ChatML format
chatml_sample = chatml_train[0]
for message in chatml_sample['messages']:
    print(f"{message['role']}: {message['content']}")
```
Training Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")

# Prepare data: flatten either format into a single training string
def format_prompt(example):
    if 'messages' in example:
        # ChatML format
        return "\n".join(f"{msg['role']}: {msg['content']}" for msg in example['messages'])
    else:
        # Alpaca format
        return f"Instruction: {example['instruction']}\nOutput: {example['output']}"

# Tokenize and train
# ... training code ...
```
Data Quality
Validation Results
- Total Records Processed: ~250,000
- Valid Records: ~245,000 (98%)
- Duplicates Removed: ~5,000
- Content Cleaned: ~1,000
Quality Metrics
- Schema Compliance: 100% (all records pass Pydantic validation)
- Total Records: ~3,730 (combined from all sources)
- Source Attribution: 100% (all records have provenance tracking)
Limitations and Biases
Known Limitations
- Language: Dataset is primarily in English
- Temporal Coverage: CVE data limited to available years
- Tool Coverage: Focus on common pentesting tools
- Scenario Diversity: Limited to available pentesting scenarios
Potential Biases
- Source Bias: Heavy reliance on MITRE/NVD for vulnerability data
- Tool Bias: Focus on popular open-source tools
- Geographic Bias: Primarily Western cybersecurity practices
Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{mitre_nvd_exploitdb_dataset,
  title={MITRE+NVD+ExploitDB Dataset (Alpaca/ChatML/Harmony)},
  author={Jason O'Neal},
  year={2024},
  url={https://huggingface.co/datasets/jason-oneal/mitre-stix-cve-exploitdb-dataset-alpaca-chatml-harmony}
}
```
License
This dataset is licensed under Apache-2.0. Individual data sources retain their original licenses:
- MITRE CVE: Public domain
- ExploitDB: Various licenses per exploit
- HuggingFace Datasets: Apache-2.0
Contributing
Contributions are welcome! Please see the repository for contribution guidelines.
Updates
This dataset is updated daily via automated GitHub Actions workflows. Each update includes:
- Latest CVE data from MITRE
- Latest CVSS enrichment from NVD (when available)
- New exploits from ExploitDB
- Updated secure coding scenarios from SecureCode v2
- Updated MITRE ATT&CK reasoning and TTP mapping datasets (when accessible)
Contact
For questions or issues:
- GitHub Issues: Repository Issues
- Email: jason.allen.oneal@gmail.com
Generated by the Pentest Dataset Builder Pipeline