Custom search loops
Drive Study, Sampler, and search spaces directly when optimize_anything's defaults don't fit.
`Study` and `Sampler` are the search primitives that `optimize_anything` and `dn capability improve` build on. Drop down to them when the wrappers' defaults don't fit: a search that isn't instruction optimization, a custom stopping rule, or a sampler that isn't GEPA-backed reflection.
```python
import asyncio

from dreadnode.optimization import Float, Study
from dreadnode.samplers import RandomSampler


async def objective(candidate: dict[str, object]) -> float:
    temperature = float(candidate["temperature"])
    return 1.0 - abs(temperature - 0.4)


async def main() -> None:
    sampler = RandomSampler(
        search_space={
            "temperature": Float(0.0, 1.0),
            "style": ["concise", "teacher", "technical"],
        },
        seed=42,
    )
    study = Study(
        name="prompt-shape-search",
        objective=objective,
        sampler=sampler,
        direction="maximize",
        n_iterations=8,
    )
    result = await study.console()
    print(result.best_trial.score, result.best_trial.candidate)


asyncio.run(main())
```
The mental model
| Piece | What it does |
|---|---|
| `Study` | owns the objective, run loop, stopping conditions, and final result |
| `Sampler` | proposes the next candidate or batch of candidates |
| `Trial` | one evaluated candidate and its score |
| search space | typed parameter definitions such as `Float`, `Int`, and `Categorical` |
A Study calls the sampler for candidates, passes each to the objective function, records the
trial, and stops when a stopping condition fires or n_iterations is hit.
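The loop can be sketched in plain Python. This is an illustration of the flow described above, not the library's implementation; real studies are async and support richer stopping conditions:

```python
import random

random.seed(3)

def run_study(sampler, objective, n_iterations):
    # The loop a Study runs: propose, evaluate, record, stop at the budget.
    trials = []
    for _ in range(n_iterations):
        candidate = sampler()              # sampler proposes the next candidate
        score = objective(candidate)       # objective scores it
        trials.append((candidate, score))  # one trial = candidate + score
    return max(trials, key=lambda t: t[1])  # best trial when maximizing

best_candidate, best_score = run_study(
    sampler=lambda: {"temperature": random.random()},
    objective=lambda c: 1.0 - abs(c["temperature"] - 0.4),
    n_iterations=8,
)
```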
The objective here is a Python callable that returns a score — distinct from the free-text
objective string that optimize_anything passes to the GEPA proposer.
Search spaces
The standard search-space helpers are:
- `Float(min, max)`
- `Int(min, max)`
- `Categorical([...])` (bare lists are coerced automatically)
- `SearchSpace(...)` when you want an explicit composed object
Use categorical values for discrete prompt templates or policy choices. Use numeric ranges for temperatures, thresholds, budgets, or other tunables.
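As an illustration of what these definitions describe (not how the samplers actually implement draws), a candidate from the space in the example above could be produced like this:

```python
import random

random.seed(7)

def draw_float(lo, hi):
    # Float(lo, hi): a continuous numeric range
    return random.uniform(lo, hi)

def draw_categorical(options):
    # Categorical([...]) or a bare list: a discrete choice
    return random.choice(options)

candidate = {
    "temperature": draw_float(0.0, 1.0),
    "style": draw_categorical(["concise", "teacher", "technical"]),
}
```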
Choose a sampler by search style
You do not need the “best” sampler in the abstract. You need the one that matches the shape of the problem.
| Sampler | Good starting use case |
|---|---|
| `RandomSampler` | Cheap baseline, small search spaces, first-pass exploration. |
| `GridSampler` | Exhaustive sweeps over a small discrete space. |
| `OptunaSampler` | Classical hyperparameter search over numeric spaces. |
| `beam_search_sampler` | Prompt refinement with multiple strong candidates kept alive. |
| `graph_neighborhood_sampler` | Structured mutation over graph-like neighborhoods. |
| `iterative_sampler` | Single-threaded refinement that keeps improving on the best trial so far. |
| `FuzzingSampler` / `fuzzing_sampler` | Mutation-heavy generation from seed prompts. |
| `MAPElitesSampler` / `mapelites_sampler` | Quality-diversity exploration when you want varied successful candidates. |
| `StrategyLibrarySampler` / `strategy_library_sampler` | Attack patterns drawn from a library of labeled strategies. |
All of these import from `dreadnode.samplers`.
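For intuition, the "keeps improving on the best trial so far" pattern behind `iterative_sampler` can be sketched as a plain hill climb. This is illustrative only; the real sampler's proposal logic differs:

```python
import random

random.seed(0)

def objective(temperature):
    return 1.0 - abs(temperature - 0.4)

best, best_score = 0.9, objective(0.9)  # start from an arbitrary candidate
for _ in range(50):
    # Propose a small perturbation of the best candidate so far,
    # clamped back into the [0, 1] search range.
    proposal = min(1.0, max(0.0, best + random.uniform(-0.1, 0.1)))
    score = objective(proposal)
    if score > best_score:  # keep only strict improvements
        best, best_score = proposal, score
```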
AIRT ships additional samplers for image-space adversarial work (`SimBASampler`, `NESSampler`, `ZOOSampler`, `BoundarySampler`, `HopSkipJumpSampler`, `RandomImageSampler`) and wraps this same study machinery behind attack factories like `pair_attack` and `crescendo_attack`. See the AIRT SDK for that surface.
When to step down to a study
Most workflows that search a space hide the study behind a higher-level wrapper: `optimize_anything` for prompt and capability work, attack factories for AIRT. Step down to `Study` directly when you want to:
- customize the search loop instead of accepting wrapper defaults
- build an iterative search that is neither optimization nor AIRT
- read `result.trials` directly to understand what an attack or optimization actually produced
What to inspect in a result
Start with:

- `result.best_trial`
- `result.trials`, the candidate history
- the score trajectory over time
Trace-enabled studies also surface the trial progression in tracing and console output.
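A quick way to read the score trajectory is the running best, which shows whether the search was still improving when it stopped. A minimal sketch, assuming only that you can pull one score per trial (the exact trial fields may vary):

```python
# Example per-iteration scores pulled from a result's trials.
scores = [0.2, 0.5, 0.4, 0.7, 0.6]

# Running best: the best score seen up to each iteration.
running_best = []
best = float("-inf")
for s in scores:
    best = max(best, s)
    running_best.append(best)

# A flat tail in running_best means the search plateaued.
```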