
Custom search loops

Drive Study, Sampler, and search spaces directly when optimize_anything's defaults don't fit.

Study and Sampler are the search primitives optimize_anything and dn capability improve build on. Drop to them when the wrappers’ defaults don’t fit — a search that isn’t instruction optimization, a custom stopping rule, a sampler that isn’t GEPA-backed reflection.

import asyncio

from dreadnode.optimization import Float, Study
from dreadnode.samplers import RandomSampler


async def objective(candidate: dict[str, object]) -> float:
    temperature = float(candidate["temperature"])
    return 1.0 - abs(temperature - 0.4)


async def main() -> None:
    sampler = RandomSampler(
        search_space={
            "temperature": Float(0.0, 1.0),
            "style": ["concise", "teacher", "technical"],
        },
        seed=42,
    )
    study = Study(
        name="prompt-shape-search",
        objective=objective,
        sampler=sampler,
        direction="maximize",
        n_iterations=8,
    )
    result = await study.console()
    print(result.best_trial.score, result.best_trial.candidate)


asyncio.run(main())
  • Study: owns the objective, run loop, stopping conditions, and final result
  • Sampler: proposes the next candidate or batch of candidates
  • Trial: one evaluated candidate and its score
  • search space: typed parameter definitions such as Float, Int, and Categorical

A Study calls the sampler for candidates, passes each to the objective function, records the trial, and stops when a stopping condition fires or n_iterations is hit.
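Conceptually, that loop can be sketched in plain Python. This is an illustrative stand-in, not the SDK's internals; `run_study`, the `Trial` dataclass, and the callable signatures below are invented for the sketch:

```python
import random
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Trial:
    candidate: dict[str, Any]
    score: float


def run_study(
    objective: Callable[[dict[str, Any]], float],
    sample: Callable[[], dict[str, Any]],
    n_iterations: int,
    should_stop: Callable[[list[Trial]], bool] = lambda trials: False,
) -> list[Trial]:
    trials: list[Trial] = []
    for _ in range(n_iterations):
        candidate = sample()  # the sampler proposes the next candidate
        trials.append(Trial(candidate, objective(candidate)))  # evaluate and record
        if should_stop(trials):  # a stopping condition can fire early
            break
    return trials


rng = random.Random(42)
trials = run_study(
    objective=lambda c: 1.0 - abs(c["temperature"] - 0.4),
    sample=lambda: {"temperature": rng.uniform(0.0, 1.0)},
    n_iterations=8,
)
best = max(trials, key=lambda t: t.score)
```

The real Study adds tracing, async objectives, and batching on top, but the control flow follows this shape.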

The objective here is a Python callable that returns a score — distinct from the free-text objective string that optimize_anything passes to the GEPA proposer.

The standard search-space helpers are:

  • Float(min, max)
  • Int(min, max)
  • Categorical([...]) — bare lists are coerced automatically
  • SearchSpace(...) when you want an explicit composed object
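To make the list coercion concrete, here is a hypothetical mini-implementation of sampling from such a space. The `Float`, `Categorical`, and `sample_space` definitions below are stand-ins for illustration, not the library's classes:

```python
import random
from typing import Any


class Float:
    def __init__(self, low: float, high: float) -> None:
        self.low, self.high = low, high

    def sample(self, rng: random.Random) -> float:
        return rng.uniform(self.low, self.high)


class Categorical:
    def __init__(self, choices: list[Any]) -> None:
        self.choices = choices

    def sample(self, rng: random.Random) -> Any:
        return rng.choice(self.choices)


def sample_space(space: dict[str, Any], rng: random.Random) -> dict[str, Any]:
    # A bare list is treated as Categorical, mirroring the coercion above.
    return {
        name: (Categorical(dim) if isinstance(dim, list) else dim).sample(rng)
        for name, dim in space.items()
    }


rng = random.Random(42)
candidate = sample_space(
    {"temperature": Float(0.0, 1.0), "style": ["concise", "teacher", "technical"]},
    rng,
)
```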

Use categorical values for discrete prompt templates or policy choices. Use numeric ranges for temperatures, thresholds, budgets, or other tunables.

You do not need the “best” sampler in the abstract. You need the one that matches the shape of the problem.

  • RandomSampler: cheap baseline, small search spaces, first-pass exploration.
  • GridSampler: exhaustive sweeps over a small discrete space.
  • OptunaSampler: classical hyperparameter search over numeric spaces.
  • beam_search_sampler: prompt refinement with multiple strong candidates kept alive.
  • graph_neighborhood_sampler: structured mutation over graph-like neighborhoods.
  • iterative_sampler: single-thread refinement that keeps improving on the best trial so far.
  • FuzzingSampler / fuzzing_sampler: mutation-heavy generation from seed prompts.
  • MAPElitesSampler / mapelites_sampler: quality-diversity exploration when you want varied successful candidates.
  • StrategyLibrarySampler / strategy_library_sampler: attack patterns drawn from a library of labeled strategies.
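The shape difference is easiest to see for iterative_sampler-style refinement: always mutate the best candidate so far and keep only improvements. A minimal hill-climbing sketch over a single float, assuming nothing about the library's implementation:

```python
import random
from typing import Callable


def hill_climb(
    objective: Callable[[float], float],
    start: float,
    steps: int,
    rng: random.Random,
) -> tuple[float, float]:
    """Single-thread refinement: mutate the best candidate, keep only improvements."""
    best, best_score = start, objective(start)
    for _ in range(steps):
        # Small Gaussian mutation, clamped to the [0, 1] search range.
        candidate = min(1.0, max(0.0, best + rng.gauss(0.0, 0.1)))
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score


rng = random.Random(0)
best, score = hill_climb(lambda t: 1.0 - abs(t - 0.4), start=0.9, steps=50, rng=rng)
```

A random sampler would scatter all 50 evaluations independently; the refinement loop spends them walking downhill from a single point, which is why the two suit different problem shapes.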

All of these import from dreadnode.samplers.

AIRT ships additional samplers for image-space adversarial work (SimBASampler, NESSampler, ZOOSampler, BoundarySampler, HopSkipJumpSampler, RandomImageSampler) and wraps this same study machinery behind attack factories like pair_attack and crescendo_attack. See AIRT SDK for that surface.

Most workflows that search a space hide the study behind a higher-level wrapper — optimize_anything for prompt and capability work, attack factories for AIRT. Step down to Study directly when you want to:

  • customize the search loop instead of accepting wrapper defaults
  • build an iterative search that is neither optimization nor AIRT
  • read result.trials directly to understand what an attack or optimization actually produced
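A custom stopping rule, for instance, is ultimately just a predicate over the scores seen so far. A hedged sketch of a plateau rule in plain Python (`plateau_stop` is invented here for illustration, not an SDK API):

```python
def plateau_stop(scores: list[float], patience: int = 3, min_delta: float = 1e-3) -> bool:
    """Stop when the best score hasn't improved by min_delta in the last `patience` trials."""
    if len(scores) <= patience:
        return False
    recent_best = max(scores[-patience:])
    earlier_best = max(scores[:-patience])
    return recent_best < earlier_best + min_delta


scores: list[float] = []
for score in [0.2, 0.5, 0.6, 0.6, 0.6, 0.6, 0.9]:
    scores.append(score)
    if plateau_stop(scores):
        break  # stops after the fourth 0.6; the 0.9 is never evaluated
```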

Start with:

  • result.best_trial
  • result.trials
  • the candidate history
  • the score trajectory over time
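Given trials in evaluation order, the candidate history and score trajectory fall out directly. A sketch over plain `(candidate, score)` tuples; the real result object exposes richer structure:

```python
# Each trial as (candidate, score), in evaluation order.
trials = [
    ({"temperature": 0.9}, 0.5),
    ({"temperature": 0.2}, 0.8),
    ({"temperature": 0.45}, 0.95),
    ({"temperature": 0.7}, 0.7),
]

best_trial = max(trials, key=lambda t: t[1])
candidate_history = [candidate for candidate, _ in trials]

# Running best: the score trajectory over time, monotone by construction.
trajectory: list[float] = []
for _, score in trials:
    trajectory.append(max(score, trajectory[-1]) if trajectory else score)
```

A flat trajectory late in the run is the usual signal that the sampler has stopped finding improvements and the budget could be cut.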

Trace-enabled studies also surface the trial progression in tracing and console output.