# Examples
Start from the real SDK scripts and notebooks shipped in the repo instead of inventing workflows from scratch.
The fastest way to understand the SDK is often to run one of the shipped examples and then read the corresponding guide page alongside it.
All examples live under `packages/sdk/examples/`:

- scripts: `packages/sdk/examples/scripts`
- notebooks: `packages/sdk/examples/notebooks`
Run scripts from `packages/sdk` with `uv run python ...`:

```shell
cd packages/sdk
uv run python examples/scripts/agent_with_tools.py
```

```python
from pathlib import Path

SDK_EXAMPLES = Path("packages/sdk/examples")
print(SDK_EXAMPLES / "scripts" / "agent_with_tools.py")
print(SDK_EXAMPLES / "notebooks" / "agentic_red_teaming.ipynb")
```
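If you want an overview of everything that ships, a small pathlib helper can enumerate the example files by kind. `list_examples` is an illustrative helper written here, not part of the SDK:

```python
from pathlib import Path


def list_examples(root: Path) -> dict[str, list[str]]:
    """Group shipped example files by kind (scripts vs. notebooks)."""
    return {
        kind: sorted(p.name for p in (root / kind).iterdir() if p.is_file())
        for kind in ("scripts", "notebooks")
        if (root / kind).is_dir()
    }


# e.g. list_examples(Path("packages/sdk/examples"))
```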
## Script examples

| File | What it demonstrates | Read this first |
| --- | --- | --- |
| `agent_with_tools.py` | core Agent loop, Python tools, trajectories | Agents and Tools |
| `basic_tracing.py` | spans, trace grouping, and local observability | Tracing |
| `evaluation_with_scorers.py` | Evaluation, dataset rows, built-in and custom scorers | Evaluations and Scorers |
| `optimization_study.py` | Study, RandomSampler, and search-space tuning | Studies & Samplers |
| `submit_training_job.py` | publishing artifacts, then submitting a hosted SFT job | Packages & Capabilities and Training |
| `world_manifest_and_trajectories.py` | low-level worlds control-plane calls from Python | API Client |
| `airt_pair.py` | single-attack AIRT workflow with pair_attack | AIRT |
| `airt_crescendo.py` | multi-turn red teaming with Crescendo | AIRT |
| `airt_trace.py` | tracing around attack execution | AIRT and Tracing |
| `multi_attack_assessment.py` | one Assessment containing several attack families | AIRT |
## Notebook examples

| File | What it demonstrates | Read this first |
| --- | --- | --- |
| `agentic_red_teaming.ipynb` | end-to-end agentic red teaming workflow | AIRT |
| `openai_agentic_red_teaming.ipynb` | provider-specific red teaming walkthrough | AIRT |
| `pair_attack.ipynb` | notebook-friendly PAIR workflow | AIRT |
| `crescendo_with_transforms.ipynb` | Crescendo plus input transforms | Transforms |
| `tree_of_attacks_with_transforms.ipynb` | TAP-style attacks with transforms | Transforms and AIRT |
| `graph_of_attacks_with_transforms.ipynb` | graph-style attack search | Studies & Samplers |
| `multimodal_attacks_transforms.ipynb` | multimodal attack surface plus transforms | Transforms |
| `ide_coding_assistant_attacks.ipynb` | IDE-agent attack patterns | AIRT |
| `compliance_tagging.ipynb` | transform and attack tagging for reporting | Transforms |
## Which example to run first

If you are new to the SDK:

- `agent_with_tools.py`
- `evaluation_with_scorers.py`
- `optimization_study.py`
- one AIRT script such as `airt_pair.py`
If you already know the basics and care about platform-backed workflows:

- `submit_training_job.py`
- `world_manifest_and_trajectories.py`
- `multi_attack_assessment.py`
## Environment expectations

Most examples assume one or both of these:

- provider credentials such as `OPENAI_API_KEY`
- Dreadnode config via `dn.configure()` or `DREADNODE_*` environment variables
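A quick preflight check for those prerequisites can save a confusing traceback mid-run. This is a sketch, not SDK functionality: `missing_credentials` is a hypothetical helper, and the variable names passed to it are illustrative, so adapt them to whichever provider and Dreadnode settings your chosen example actually reads:

```python
import os


def missing_credentials(required: list[str]) -> list[str]:
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


# Illustrative check: most scripts want a provider key; adjust the list
# for the example you are about to run.
missing = missing_credentials(["OPENAI_API_KEY"])
if missing:
    print(f"Set these before running the example: {', '.join(missing)}")
```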
Examples that create hosted jobs also assume:
- a reachable platform server
- an API key with access to the target organization and workspace
- published or publishable artifacts such as capabilities and datasets
## Practical advice

Treat the examples as working starting points, not perfect architecture. The best workflow is:
- run the closest example
- confirm it works in your environment
- copy only the parts you actually need into your own codebase
That is usually faster and safer than starting from a blank file.