Export
Export AI red teaming findings as PDF reports, Parquet data files, and CLI-generated reports.
Dreadnode provides multiple ways to export AI red teaming results for stakeholders, data analysis, adversarial training, and compliance records.
Export PDF Report
Click Export PDF Report from the top-right of the Overview page to generate a downloadable PDF. The report includes:
- Executive summary with risk level and key metrics
- Severity distribution chart
- Top findings ranked by score
- Compliance framework mapping (OWASP, MITRE ATLAS, NIST, Google SAIF)
- Model configuration details (target, attacker, judge models)
- Recommendations based on findings
This is the primary deliverable for stakeholders who need a go/no-go decision on model deployment. Share it with CISOs, VPs of Product, safety leads, and governance teams.
Download Parquet
Click Download Parquet from the top-right of the findings table to export all findings as an Apache Parquet file.
The Parquet file contains every column from the findings table:
| Field | Description |
|---|---|
| severity | Finding severity level (Critical, High, Medium, Low, Info) |
| score | Jailbreak score (0.0 to 1.0) |
| goal | The attack objective |
| attack | Attack strategy that produced the finding |
| category | Harm category |
| type | Finding type (jailbreak, partial, refusal) |
| transforms | Transforms applied |
| trace_id | Link back to the full trace in the platform |
| created_at | When the finding was recorded |
| updated_at | When the finding was last modified |
Use cases for Parquet export
- Post-safety-training improvement - load successful attack prompts and target responses into your adversarial fine-tuning pipeline. Every jailbreak in the file is a training signal that addresses a real, demonstrated vulnerability in the model.
- Risk mitigation evidence - provide concrete, auditable evidence of where the model fails. This is what safety teams need to prioritize mitigations and demonstrate due diligence to compliance stakeholders.
- Custom analysis - load into Python with pandas or polars for analysis beyond what the dashboard provides:
```python
import polars as pl

findings = pl.read_parquet("findings.parquet")

# Which transforms have the highest success rate?
findings.filter(pl.col("type") == "jailbreak") \
    .group_by("transforms") \
    .agg(pl.len().alias("jailbreaks")) \
    .sort("jailbreaks", descending=True)

# Which goals are most vulnerable?
findings.filter(pl.col("score") >= 0.9) \
    .group_by("goal") \
    .agg(pl.len().alias("critical_count")) \
    .sort("critical_count", descending=True)
```

- BI tools - import into Tableau, Looker, or Power BI for organization-wide reporting and trend tracking across model versions
- Archival - preserve a complete record of every finding for regulatory compliance and audit trails
CLI report generation
Generate reports programmatically from the command line:
Assessment-level
Section titled “Assessment-level”# List reports for an assessmentdn airt reports <assessment-id>
# Get a specific reportdn airt report <assessment-id> <report-id>Project-level
```bash
# High-level summary across all assessments
dn airt project-summary <project>

# Findings with filtering
dn airt findings <project> --severity high --page 1 --page-size 20
dn airt findings <project> --category harmful_content --sort-by score --sort-dir desc

# Generate a full project report
dn airt generate-project-report <project> --format both
```

The `--format` flag accepts `markdown`, `json`, or `both`.
Next steps
- Compliance - framework mapping details
- Analytics & Reporting - deep analytics charts
- Overview Dashboard - risk metrics and findings