# Studies & Samplers

Use `Study`, `Sampler`, and search spaces to run iterative search loops in optimization and AIRT.
Studies and samplers are the search backbone behind much of the SDK. If an SDK workflow explores a
space of candidates over multiple trials, there is usually a Study and a Sampler underneath it.
## The mental model

| Piece | What it does |
|---|---|
| `Study` | owns the objective, run loop, stopping conditions, and final result |
| `Sampler` | proposes the next candidate or batch of candidates |
| `Trial` | one evaluated candidate and its score |
| Search space | typed parameter definitions such as `Float`, `Int`, and `Categorical` |
Optimization uses this model directly. AIRT attacks usually wrap it in a higher-level attack factory, but the underlying execution is still a study.
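To make the roles concrete, here is a minimal, SDK-free sketch of the loop. The class and method names below (`SimpleStudy`, `propose`, `run`) are illustrative stand-ins for the Study/Sampler/Trial roles, not the real SDK API:

```python
import random


class RandomTemperatureSampler:
    """Sampler role: proposes the next candidate."""

    def __init__(self, seed: int) -> None:
        self._rng = random.Random(seed)

    def propose(self) -> dict[str, float]:
        return {"temperature": self._rng.uniform(0.0, 1.0)}


class SimpleStudy:
    """Study role: owns the objective, run loop, and final result."""

    def __init__(self, objective, sampler, n_iterations: int) -> None:
        self.objective = objective
        self.sampler = sampler
        self.n_iterations = n_iterations
        # Each trial is one evaluated candidate and its score.
        self.trials: list[tuple[dict, float]] = []

    def run(self) -> tuple[dict, float]:
        for _ in range(self.n_iterations):
            candidate = self.sampler.propose()
            score = self.objective(candidate)
            self.trials.append((candidate, score))
        # Stopping condition here is just the iteration budget;
        # direction is "maximize", so the best trial has the highest score.
        return max(self.trials, key=lambda trial: trial[1])


def objective(candidate: dict) -> float:
    return 1.0 - abs(candidate["temperature"] - 0.4)


study = SimpleStudy(objective, RandomTemperatureSampler(seed=7), n_iterations=20)
best_candidate, best_score = study.run()
```

The real SDK adds async execution, typed search spaces, and tracing on top of this same shape.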
## Run a simple study

```python
import asyncio

from dreadnode.optimization import Float, Study
from dreadnode.samplers.random import RandomSampler


async def objective(candidate: dict[str, object]) -> float:
    temperature = float(candidate["temperature"])
    return 1.0 - abs(temperature - 0.4)


async def main() -> None:
    sampler = RandomSampler(
        search_space={
            "temperature": Float(0.0, 1.0),
            "style": ["concise", "teacher", "technical"],
        },
        seed=42,
    )

    study = Study(
        name="prompt-shape-search",
        objective=objective,
        sampler=sampler,
        direction="maximize",
        n_iterations=8,
    )

    result = await study.console()
    print(result.best_trial.score, result.best_trial.candidate)


asyncio.run(main())
```

This is the base pattern to understand before you move into more automated attack or optimization workflows.
## Search spaces

The standard search-space helpers are:

- `Float(min, max)`
- `Int(min, max)`
- `Categorical([...])`
- `SearchSpace(...)` when you want an explicit composed object
Use categorical values for discrete prompt templates or policy choices. Use numeric ranges for temperatures, thresholds, budgets, or other tunables.
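As a sketch of what these typed definitions buy you, here are illustrative stand-ins: they share the helper names above but not necessarily the SDK's actual signatures, and the `sample` method is purely for demonstration:

```python
import random
from dataclasses import dataclass, field


# Illustrative stand-ins only; the real SDK classes share these names
# but may have different constructors and behavior.
@dataclass
class Float:
    low: float
    high: float

    def sample(self, rng: random.Random) -> float:
        return rng.uniform(self.low, self.high)


@dataclass
class Int:
    low: int
    high: int

    def sample(self, rng: random.Random) -> int:
        return rng.randint(self.low, self.high)


@dataclass
class Categorical:
    choices: list = field(default_factory=list)

    def sample(self, rng: random.Random):
        return rng.choice(self.choices)


rng = random.Random(42)
space = {
    "temperature": Float(0.0, 1.0),  # numeric tunable
    "max_turns": Int(1, 5),          # integer budget
    "style": Categorical(["concise", "teacher", "technical"]),  # discrete choice
}

# A candidate is one concrete draw from every parameter in the space.
candidate = {name: param.sample(rng) for name, param in space.items()}
```

Typing the parameters this way is what lets a sampler know which values are legal to propose for each dimension.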
## Choose a sampler by search style

You do not need the “best” sampler in the abstract. You need the one that matches the shape of the problem.
| Sampler | Good starting use case |
|---|---|
| `RandomSampler` | cheap baseline, small search spaces, first-pass exploration |
| `GridSampler` | exhaustive sweeps over a small discrete space |
| `OptunaSampler` | classical hyperparameter search over numeric spaces |
| `beam_search_sampler` | prompt refinement with multiple strong candidates kept alive |
| `graph_neighborhood_sampler` | structured mutation over graph-like neighborhoods |
| `FuzzingSampler` / `fuzzing_sampler` | mutation-heavy generation from seed prompts |
| `MAPElitesSampler` / `mapelites_sampler` | quality-diversity exploration when you want varied successful candidates |
Examples from the shipped surface:

- `pair_attack` uses `beam_search_sampler`
- `crescendo_attack` uses `iterative_sampler`
- many jailbreak workflows rely on search-plus-refinement rather than one-shot prompting
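To make the random-versus-grid distinction from the table concrete, here is a small pure-Python comparison (illustrative only, not the SDK samplers):

```python
import itertools
import random

space = {
    "style": ["concise", "teacher", "technical"],
    "turns": [1, 2, 3],
}

# Grid-style proposal: exhaustively enumerate every combination.
# Cost grows multiplicatively with each dimension, so this only
# makes sense for small discrete spaces.
grid_candidates = [
    dict(zip(space.keys(), values))
    for values in itertools.product(*space.values())
]

# Random-style proposal: draw a fixed budget of independent candidates.
# Coverage is probabilistic, but the budget is yours to set, which is
# why it works as a cheap first-pass baseline.
rng = random.Random(0)
random_candidates = [
    {name: rng.choice(choices) for name, choices in space.items()}
    for _ in range(5)
]
```

The beam, fuzzing, and MAP-Elites samplers differ along the same axis: each trades exhaustiveness for a different strategy of deciding what to propose next.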
## When AIRT uses this page’s concepts

You do not always need to instantiate `Study` yourself. Attack factories already do that for you.
But this page becomes useful when you want to:
- understand what an attack result actually is
- customize the search loop instead of taking attack defaults
- build your own iterative search workflow that is not quite optimization and not quite AIRT
## What to inspect in a result

Start with:

- `result.best_trial`
- `result.trials`
- the candidate history
- the score trajectory over time
If the study is trace-enabled, the trial progression is also visible in tracing and console output.
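As a sketch of that post-run inspection, assuming each trial exposes a candidate and a score as in the mental model above (the trial data here is made up for illustration):

```python
# Hypothetical trial history: (candidate, score) pairs in evaluation order.
trials = [
    ({"temperature": 0.9}, 0.5),
    ({"temperature": 0.5}, 0.9),
    ({"temperature": 0.41}, 0.99),
]

# Candidate history: the sequence of proposals the sampler made.
candidate_history = [candidate for candidate, _ in trials]

# Score trajectory: running best score over time. A flat tail suggests
# the search has plateaued; a rising curve suggests it is still improving.
best_so_far: list[float] = []
best = float("-inf")
for _, score in trials:
    best = max(best, score)
    best_so_far.append(best)

# Best trial: the highest-scoring candidate (direction "maximize").
best_candidate, best_score = max(trials, key=lambda trial: trial[1])
```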