dreadnode.samplers
API reference for the dreadnode.samplers module.
Built-in samplers for optimization studies.
ArchiveCell
ArchiveCell(candidate: CandidateT, fitness: float, trial_id: Any = None, iteration: int = 0)

A cell in the MAP-Elites archive storing an elite candidate.
Attributes:
- candidate (CandidateT) – The elite candidate for this cell.
- fitness (float) – The fitness score of this candidate.
- trial_id (Any) – The trial ID that produced this elite.
- iteration (int) – When this elite was discovered.
BoundarySampler
BoundarySampler(source: Image, target: Image, *, objective: str | None = None, threshold: float = 0.0, tolerance: float = 0.0001, max_iterations: int = 50)

Binary search sampler to find the decision boundary between two images.
Performs binary search along the line between a source image and a target image to find the decision boundary. Useful for understanding model sensitivity or finding minimal perturbations.
The sampler iteratively narrows the search interval based on whether midpoint samples are adversarial (above threshold) or not.
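The interval-narrowing logic can be sketched in plain Python. This is an illustrative sketch, not the library's implementation: `is_adversarial` stands in for the objective/threshold check, and images are flattened to plain lists of floats.

```python
def find_boundary(source, target, is_adversarial, tolerance=1e-4, max_iterations=50):
    """Binary-search the interpolation factor t in [0, 1] where
    blend(t) = (1 - t) * source + t * target crosses the decision boundary.
    Assumes `source` is non-adversarial and `target` is adversarial."""
    lo, hi = 0.0, 1.0
    for _ in range(max_iterations):
        if hi - lo < tolerance:
            break
        mid = (lo + hi) / 2
        blended = [(1 - mid) * s + mid * t for s, t in zip(source, target)]
        if is_adversarial(blended):
            hi = mid  # boundary lies at or below mid
        else:
            lo = mid  # boundary lies above mid
    return hi  # smallest interpolation factor known to be adversarial
```

Each step halves the interval, so the boundary is located to within `tolerance` in roughly log2(1/tolerance) model queries.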
Example
sampler = BoundarySampler(
    source=clean_image,
    target=adversarial_image,
    objective="confidence",
    threshold=0.5,
)
Parameters:
- source (Image) – The starting point (typically non-adversarial).
- target (Image) – The ending point (typically adversarial).
- objective (str | None, default: None) – Name of the score to use for boundary decisions.
- threshold (float, default: 0.0) – Score threshold for classifying as adversarial.
- tolerance (float, default: 0.0001) – Stop when the interval is smaller than this.
- max_iterations (int, default: 50) – Maximum number of binary search steps.
boundary

boundary: Image | None – Return the found boundary image, if available.

exhausted

exhausted: bool – Return True when the boundary search is complete.

reset() -> None – Reset binary search state.

sample

sample(history: list[Trial[Image]]) -> list[Sample[Image]] – Return the midpoint sample for binary search.

tell(trials: list[Trial[Image]]) -> None – Update binary search bounds based on the trial result.
FuzzingSampler
FuzzingSampler(mutators: list[TransformLike[CandidateT, CandidateT]], initial_seeds: list[CandidateT], *, crossover_mutator: TransformLike[tuple[CandidateT, CandidateT], CandidateT] | None = None, selection_strategy: Literal["weighted", "uniform", "ucb"] = "weighted", retention_threshold: float = 0.5, max_pool_size: int = 100, candidates_per_iteration: int = 1)

Fuzzing-based sampler with mutation operators and seed pool management.
Maintains a pool of seed templates and iteratively:
- Selects a seed using weighted selection (favoring successful seeds)
- Applies a random mutation operator to generate a new candidate
- Evaluates the candidate
- If successful (score > threshold), adds the mutated candidate to the pool
This implements the core fuzzing loop from GPTFuzzer, using weighted random selection instead of full MCTS for simplicity.
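The weighted selection step can be sketched as follows. This is a minimal stand-in for the sampler's internal selection, assuming each pool entry is a `(candidate, successes, attempts)` tuple rather than the library's `SeedEntry`:

```python
import random

def select_seed(pool, rng=None):
    """Weighted seed selection for the fuzzing loop: seeds whose mutations
    succeeded more often are drawn more frequently. The +1 smoothing term
    keeps fresh, unproven seeds selectable."""
    rng = rng or random.Random()
    weights = [(successes + 1) / (attempts + 1) for _, successes, attempts in pool]
    return rng.choices(pool, weights=weights, k=1)[0]
```

Over many iterations this biases mutation effort toward productive regions of the seed pool without starving new entries.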
Parameters:
- mutators (list[TransformLike[CandidateT, CandidateT]]) – List of mutation transforms. Each takes a seed and returns a mutated version.
- initial_seeds (list[CandidateT]) – Starting seed templates (human-written jailbreak prompts).
- crossover_mutator (TransformLike[tuple[CandidateT, CandidateT], CandidateT] | None, default: None) – Optional transform for crossover (takes two seeds, returns one).
- selection_strategy (Literal["weighted", "uniform", "ucb"], default: "weighted") – How to select seeds for mutation: "weighted" weights by success rate, "uniform" selects uniformly at random, "ucb" uses Upper Confidence Bound selection.
- retention_threshold (float, default: 0.5) – Minimum score to retain a mutated candidate in the pool.
- max_pool_size (int, default: 100) – Maximum seeds to keep in the pool (oldest removed if exceeded).
- candidates_per_iteration (int, default: 1) – How many candidates to generate per iteration.
exhausted

exhausted: bool – The fuzzing sampler never exhausts; it can always generate more candidates.

pool: list[SeedEntry[CandidateT]] – Get the current seed pool.

pool_size

pool_size: int – Current number of seeds in the pool.

total_successes

total_successes: int – Total number of successful jailbreaks found.

reset() -> None – Reset sampler state (keeps initial seeds only).

sample

sample(history: list[Trial[CandidateT]]) -> list[Sample[CandidateT]] – Generate new candidates by mutating seeds from the pool.

tell(trials: list[Trial[CandidateT]]) -> None – Process completed trials and update the seed pool.
GraphSampler
GraphSampler(transform: TransformLike[list[Trial[CandidateT]], CandidateT], initial_candidate: CandidateT, *, branching_factor: int = 3, context_collector: TrialCollector[CandidateT] = lineage, pruning_sampler: TrialSampler[CandidateT] = top_k)

Graph-based sampler using transforms to generate new candidates.
Maintains a directed acyclic graph where nodes are trials and edges represent parent-child relationships. Uses an async transform to generate new candidates based on trial context.
For each sampling step:
- Gather context trials for each leaf using context_collector
- Apply transform to generate branching_factor children per leaf
- Return all new candidates as samples
After evaluation (via tell()), prunes to keep best candidates as leaves.
reset() -> None – Reset to the initial state.

sample

sample(history: list[Trial[CandidateT]]) -> list[Sample[CandidateT]] – Generate new candidates from the current leaves.

tell(trials: list[Trial[CandidateT]]) -> None – Process completed trials and update the leaves.
GridSampler
GridSampler(grid: dict[str, list[Any]], *, shuffle: bool = False, seed: int | None = None)

Exhaustive grid search over all parameter combinations.
Evaluates every combination of parameter values exactly once.
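The exhaustive enumeration is equivalent to a Cartesian product over the value lists. A minimal sketch of that behavior (not the library's code) using `itertools.product`:

```python
from itertools import product

def grid_combinations(grid):
    """Enumerate every parameter combination exactly once, mirroring
    exhaustive grid search over a dict of value lists."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]
```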
Example
sampler = GridSampler({
    "model": ["gpt-4", "claude-3"],
    "temperature": [0.3, 0.7, 1.0],
})  # yields 2 * 3 = 6 candidates

Parameters:
- grid (dict[str, list[Any]]) – Dictionary mapping parameter names to lists of values.
- shuffle (bool, default: False) – If True, randomize the order of combinations.
- seed (int | None, default: None) – Random seed for shuffling (only used if shuffle=True).
exhausted

exhausted: bool – True when all combinations have been sampled.

reset() -> None – Reset to start from the beginning.

sample

sample(history: list[Trial[dict]]) -> list[Sample[dict]] – Return the next grid combination.
HopSkipJumpSampler
HopSkipJumpSampler(source: Image, adversarial: Image | None = None, *, objective: str | None = None, adversarial_threshold: float = 0.0, norm: Norm = "l2", theta: float = 0.01, boundary_tolerance: float | None = None, step_size: float | None = None, min_evaluations: int = 50, max_evaluations: int = 100, max_iterations: int = 1000, seed: int | None = None)

HopSkipJump attack sampler for black-box adversarial attacks.
A decision-based attack that uses binary search to find the decision boundary and gradient estimation to minimize the perturbation distance.
See: HopSkipJumpAttack - https://arxiv.org/abs/1904.02144
Parameters:
- source (Image) – The original, unperturbed image.
- adversarial (Image | None, default: None) – An optional initial adversarial example. If not provided, random search will be used to find one.
- objective (str | None, default: None) – The name of the score to use for adversarial decisions.
- adversarial_threshold (float, default: 0.0) – Score threshold for adversarial classification.
- norm (Norm, default: "l2") – Distance metric ("l2", "l1", or "linf").
- theta (float, default: 0.01) – Relative size of perturbation for gradient estimation.
- boundary_tolerance (float | None, default: None) – Tolerance for binary search (default: theta/10).
- step_size (float | None, default: None) – Initial step size ratio (default: theta).
- min_evaluations (int, default: 50) – Minimum probes per gradient estimation.
- max_evaluations (int, default: 100) – Maximum probes per gradient estimation.
- max_iterations (int, default: 1000) – Maximum main iterations.
- seed (int | None, default: None) – Random seed for reproducibility.
reset() -> None – Reset sampler state.

sample

sample(history: list[Trial[Image]]) -> list[Sample[Image]] – Generate the next batch of samples.

tell(trials: list[Trial[Image]]) -> None – Process completed trials.
ImageSampler
ImageSampler(original: Image, *, objective: str | None = None, max_iterations: int = 1000, seed: int | None = None)

Base class for image-based adversarial samplers.

reset() -> None – Reset sampler state.

sample

sample(history: list[Trial[Image]]) -> list[Sample[Image]] – Generate the next batch of image candidates.

tell(trials: list[Trial[Image]]) -> None – Process completed trials.
MAPElitesSampler
MAPElitesSampler(mutator: TransformLike[tuple[CandidateT, MutationTarget], CandidateT], initial_candidates: list[CandidateT], feature_dimensions: list[list[str]], *, selection_strategy: Literal["uniform", "sparse"] = "uniform", candidates_per_iteration: int = 1)

MAP-Elites sampler for quality-diversity optimization.
Maintains a multidimensional archive where each cell stores the best candidate for that combination of feature values. Generates new candidates by mutating archive elites toward specific feature targets.
The archive is organized by feature dimensions (e.g., risk_category * attack_style). Each cell can hold one elite. New candidates replace existing elites only if they have higher fitness.
For Rainbow Teaming:
- Feature 1: Risk category (10 categories)
- Feature 2: Attack style (4 styles)
- Total cells: 10 * 4 = 40
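The archive-update rule described above (one elite per cell, replaced only by higher fitness) can be sketched with a plain dict keyed by cell coordinates. This is an illustrative sketch, not the library's `ArchiveCell` implementation:

```python
def update_archive(archive, candidate, fitness, cell):
    """MAP-Elites insertion: `cell` is a tuple of feature indices.
    An empty cell accepts any candidate; an occupied cell is replaced
    only when the new candidate has strictly higher fitness."""
    current = archive.get(cell)
    if current is None or fitness > current[1]:
        archive[cell] = (candidate, fitness)
        return True
    return False
```

Coverage (the fraction of the 40 cells that are filled) grows as mutations land in previously empty cells, while per-cell fitness only ever improves.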
Parameters:
- mutator (TransformLike[tuple[CandidateT, MutationTarget], CandidateT]) – Transform that takes (parent_prompt, target_features) and generates a mutated candidate targeting those features.
- initial_candidates (list[CandidateT]) – Seed candidates to populate the archive initially.
- feature_dimensions (list[list[str]]) – List of feature value lists. Each list defines the possible values for one dimension.
- selection_strategy (Literal["uniform", "sparse"], default: "uniform") – How to select parents from the archive: "uniform" selects uniformly at random, "sparse" prioritizes under-explored cells.
- candidates_per_iteration (int, default: 1) – How many candidates to generate per iteration.
archive

archive: dict[tuple[int, ...], ArchiveCell[CandidateT]] – Get the current archive.

coverage

coverage: float – Fraction of archive cells that are filled.

exhausted

exhausted: bool – MAP-Elites never exhausts; it can always generate more candidates.

reset() -> None – Reset sampler state.

sample

sample(history: list[Trial[CandidateT]]) -> list[Sample[CandidateT]] – Generate new candidates by mutating archive elites.

tell(trials: list[Trial[CandidateT]]) -> None – Process completed trials and update the archive.
MutationTarget
MutationTarget(feature_indices: tuple[int, ...], feature_values: tuple[str, ...])

Target cell coordinates for mutation.

Attributes:

- feature_indices (tuple[int, ...]) – Tuple of indices for each feature dimension.
- feature_values (tuple[str, ...]) – The actual feature values (for passing to the mutator).
NESSampler
NESSampler(original: Image, *, objective: str | None = None, max_iterations: int = 100, learning_rate: float = 0.01, num_samples: int = 64, sigma: float = 0.001, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, seed: int | None = None)

Natural Evolution Strategies (NES) sampler.
Estimates gradients by probing with random perturbations in positive and negative directions, then uses Adam optimizer for updates.
See: NES - Natural Evolution Strategies
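The antithetic probing described above can be sketched without the Adam step. This is a minimal illustration of the NES gradient estimator on flattened float lists, not the sampler's implementation:

```python
import random

def nes_gradient(f, x, sigma=0.001, num_samples=64, rng=None):
    """Antithetic NES gradient estimate: probe f at x +/- sigma * u for
    random Gaussian directions u and average the score-weighted directions.
    Only function evaluations are needed, which suits black-box objectives."""
    rng = rng or random.Random()
    n = len(x)
    grad = [0.0] * n
    for _ in range(num_samples):
        u = [rng.gauss(0, 1) for _ in range(n)]
        f_pos = f([xi + sigma * ui for xi, ui in zip(x, u)])
        f_neg = f([xi - sigma * ui for xi, ui in zip(x, u)])
        for i in range(n):
            grad[i] += (f_pos - f_neg) * u[i]
    return [g / (2 * sigma * num_samples) for g in grad]
```

The estimate converges to the true gradient as `num_samples` grows; the sampler then feeds it to Adam for stable updates.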
OptunaSampler
OptunaSampler(search_space: SearchSpace, *, sampler: BaseSampler | None = None, directions: list[Literal["maximize", "minimize"]] | None = None)

Sampler using Optuna's advanced optimization algorithms.
Wraps Optuna’s samplers (TPE, CMA-ES, etc.) for Bayesian optimization. Learns from previous trials to suggest better candidates.
Example
sampler = OptunaSampler(
    search_space={
        "temperature": Float(0.0, 2.0),
        "max_tokens": Int(100, 1000),
    },
    sampler=optuna.samplers.TPESampler(),
)
Parameters:
- search_space (SearchSpace) – Dictionary mapping parameter names to distributions.
- sampler (BaseSampler | None, default: None) – Optuna sampler to use. Defaults to TPESampler.
- directions (list[Literal["maximize", "minimize"]] | None, default: None) – Optimization directions for multi-objective optimization. Defaults to ["maximize"].
best_params

best_params: dict[str, Any] | None – Get the best parameters found so far.

best_value

best_value: float | None – Get the best objective value found so far.

exhausted

exhausted: bool – The Optuna sampler never exhausts; always returns False.

reset() -> None – Reset the Optuna study.

sample

sample(history: list[Trial[dict]]) -> list[Sample[dict]] – Ask Optuna for the next candidate.

tell(trials: list[Trial[dict]]) -> None – Inform Optuna of trial results.
RandomImageSampler
RandomImageSampler(shape: tuple[int, ...], *, seed: int | None = None)

Generate random noise images.
Continuously generates random images with pixel values in [0, 1]. Useful for bootstrapping adversarial attacks or exploring image space.
Example
sampler = RandomImageSampler(shape=(224, 224, 3))
Parameters:
- shape (tuple[int, ...]) – Shape of images to generate (height, width, channels).
- seed (int | None, default: None) – Random seed for reproducibility.
exhausted

exhausted: bool – The random image sampler never exhausts; always returns False.

reset() -> None – Reset the random number generator.

sample

sample(history: list[Trial[Image]]) -> list[Sample[Image]] – Return a random noise image.
RandomSampler
RandomSampler(search_space: SearchSpace, *, seed: int | None = None)

Random sampling from a search space.
Continuously samples random parameter combinations until stopped. Supports Float, Int, and Categorical distributions.
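A plain-Python sketch of this kind of sampling, with tuples like `("float", lo, hi)` standing in for the library's Float/Int distributions and plain lists for Categorical (these stand-in specs are assumptions for illustration, not the real SearchSpace types):

```python
import random

def sample_params(search_space, rng=None):
    """Draw one random candidate from a search-space dict. Ranges given as
    ("float", lo, hi) or ("int", lo, hi) stand in for Float/Int
    distributions; plain lists are treated as Categorical."""
    rng = rng or random.Random()
    params = {}
    for name, spec in search_space.items():
        if isinstance(spec, list):
            params[name] = rng.choice(spec)       # categorical choice
        elif spec[0] == "float":
            params[name] = rng.uniform(spec[1], spec[2])
        else:
            params[name] = rng.randint(spec[1], spec[2])
    return params
```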
Example
sampler = RandomSampler({
    "temperature": Float(0.0, 2.0),
    "max_tokens": Int(100, 1000),
    "model": ["gpt-4", "claude-3"],  # shorthand for Categorical
})
Parameters:
- search_space (SearchSpace) – Dictionary mapping parameter names to distributions.
- seed (int | None, default: None) – Random seed for reproducibility.
exhausted

exhausted: bool – The random sampler never exhausts; always returns False.

reset() -> None – No-op for the random sampler.

sample

sample(history: list[Trial[dict]]) -> list[Sample[dict]] – Return a random sample from the search space.
SeedEntry
SeedEntry(candidate: CandidateT, successes: int = 0, attempts: int = 0, children_added: int = 0, iteration_added: int = 0)

A seed in the fuzzing pool with success tracking.

Attributes:

- candidate (CandidateT) – The seed template.
- successes (int) – Number of times this seed produced successful jailbreaks.
- attempts (int) – Total number of times this seed was selected for mutation.
- children_added (int) – Number of successful children added to the pool from this seed.
- iteration_added (int) – When this seed was added to the pool.

success_rate

success_rate: float – Success rate of mutations from this seed.
SimBASampler
SimBASampler(original: Image, *, objective: str | None = None, theta: float = 0.1, num_masks: int = 500, norm: Norm = "l2", max_iterations: int = 10000, seed: int | None = None)

SimBA (Simple Black-box Attack) sampler.
Iteratively perturbs the image using random noise masks and retains perturbations that improve the adversarial objective.
See: SimBA - https://arxiv.org/abs/1805.12317
Strategy
Strategy(name: str, description: str, template: str, embedding: list[float] | None = None, successes: int = 0, attempts: int = 0, metadata: dict[str, Any] = dict())

A reusable attack strategy with embedding for retrieval.

Attributes:

- name (str) – Short descriptive name for the strategy.
- description (str) – Detailed description of how the strategy works.
- template (str) – Template prompt that implements the strategy.
- embedding (list[float] | None) – Vector embedding for similarity search.
- successes (int) – Number of successful attacks using this strategy.
- attempts (int) – Total number of times this strategy was used.
- metadata (dict[str, Any]) – Additional metadata (source, discovered_from, etc.).

success_rate

success_rate: float – Success rate of this strategy.

from_dict

from_dict(data: dict[str, Any]) -> Strategy – Create from a dictionary.

to_dict

to_dict() -> dict[str, Any] – Convert to a dictionary for serialization.
StrategyLibrarySampler
StrategyLibrarySampler(strategy_transform: TransformLike[dict[str, Any], str], extraction_transform: TransformLike[dict[str, Any], Strategy | None], embedding_transform: TransformLike[str, list[float]], strategy_store: StrategyStore, *, exploration_rate: float = 0.3, top_k_strategies: int = 5, retention_threshold: float = 0.7, candidates_per_iteration: int = 1)

Strategy library sampler with embedding-based retrieval and exploration.
Implements lifelong learning where the sampler:
- Retrieves relevant strategies from library based on goal similarity
- Generates attack prompts using selected strategies
- Discovers new strategies from successful attacks
- Updates the library with new strategies
This implements the core approach from AutoDAN-Turbo: balancing exploration (discovering new strategies) with exploitation (using known successful strategies).
Parameters:
- strategy_transform (TransformLike[dict[str, Any], str]) – Transform that generates attack prompts from (goal, strategies).
- extraction_transform (TransformLike[dict[str, Any], Strategy | None]) – Transform that extracts new strategies from successful attacks.
- embedding_transform (TransformLike[str, list[float]]) – Transform that computes embeddings for text.
- strategy_store (StrategyStore) – Persistent strategy storage.
- exploration_rate (float, default: 0.3) – Probability of exploring new strategies vs. exploiting known ones.
- top_k_strategies (int, default: 5) – Number of similar strategies to retrieve.
- retention_threshold (float, default: 0.7) – Minimum score to extract strategies from successful attacks.
- candidates_per_iteration (int, default: 1) – How many candidates to generate per iteration.
exhausted

exhausted: bool – The strategy sampler never exhausts; it can always generate more.

total_successes

total_successes: int – Total number of successful attacks.

reset() -> None – Reset sampler state (preserves the strategy library).

sample

sample(history: list[Trial[str]]) -> list[Sample[str]] – Generate attack prompts using strategies from the library.

set_goal

set_goal(goal: str) -> None – Set the current attack goal (for strategy retrieval).

tell(trials: list[Trial[str]]) -> None – Process completed trials and queue successful ones for strategy extraction.
StrategyStore
StrategyStore(strategies: list[Strategy] | None = None)

Persistent storage for attack strategies with embedding-based retrieval.
Stores strategies with their embeddings and supports:
- Adding new strategies
- Retrieving similar strategies by embedding similarity
- Persisting to/loading from disk (JSON format)
- Tracking strategy performance over time
Parameters:
- strategies (list[Strategy] | None, default: None) – Initial list of strategies.

strategies

strategies: list[Strategy] – Get all strategies.

add(strategy: Strategy) -> None – Add a strategy to the store. If a strategy with the same name exists, it will be updated.

get(name: str) -> Strategy | None – Get a strategy by name.

load(path: Path | str) -> None – Load the strategy library from a JSON file.

save(path: Path | str) -> None – Save the strategy library to a JSON file.

search

search(query_embedding: list[float], k: int = 5, min_similarity: float = 0.0) -> list[tuple[Strategy, float]] – Search for similar strategies using cosine similarity.
Parameters:
- query_embedding (list[float]) – Query vector to search for.
- k (int, default: 5) – Maximum number of results to return.
- min_similarity (float, default: 0.0) – Minimum similarity threshold.
Returns:
- list[tuple[Strategy, float]] – List of (strategy, similarity_score) tuples, sorted by similarity descending.
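The cosine-similarity ranking can be sketched in pure Python. This is an illustrative sketch over simple `(name, embedding)` pairs, not the store's Strategy objects:

```python
import math

def search_similar(entries, query, k=5, min_similarity=0.0):
    """Rank stored (name, embedding) pairs by cosine similarity to a query
    vector; return the top k at or above the similarity floor."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0
    scored = [(name, cosine(emb, query)) for name, emb in entries]
    scored = [s for s in scored if s[1] >= min_similarity]
    scored.sort(key=lambda s: s[1], reverse=True)
    return scored[:k]
```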
update_stats

update_stats(name: str, *, success: bool) -> None – Update success/attempt stats for a strategy.
ZOOSampler
ZOOSampler(original: Image, *, objective: str | None = None, max_iterations: int = 1000, learning_rate: float = 0.01, num_samples: int = 128, epsilon: float = 0.01, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, seed: int | None = None)

Zeroth-Order Optimization (ZOO) sampler.
Uses coordinate-wise gradient estimation with Adam optimizer.
See: ZOO - https://arxiv.org/abs/1708.03999
beam_search_sampler
beam_search_sampler(transform: TransformLike[list[Trial[CandidateT]], CandidateT], initial_candidate: CandidateT, *, beam_width: int = 3, branching_factor: int = 3, parent_depth: int = 10) -> GraphSampler[CandidateT]

Create a graph sampler configured for classic beam search.
Maintains parallel reasoning paths by keeping a “beam” of the top k trials from the previous step.
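The pruning step is a top-k selection by score. A minimal sketch over `(candidate, score)` tuples, standing in for the sampler's trial pruning:

```python
import heapq

def next_beam(scored_candidates, beam_width=3):
    """Keep the top-k candidates by score to form the next beam,
    as in classic beam-search pruning."""
    return heapq.nlargest(beam_width, scored_candidates, key=lambda c: c[1])
```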
Parameters:
- transform (TransformLike[list[Trial[CandidateT]], CandidateT]) – Function that takes trial context and generates new candidates.
- initial_candidate (CandidateT) – The starting point for the search.
- beam_width (int, default: 3) – Number of top candidates to keep at each step (the ‘k’).
- branching_factor (int, default: 3) – How many new candidates to generate from each beam trial.
- parent_depth (int, default: 10) – Number of ancestors to include in context for refinement.
Returns:
- GraphSampler[CandidateT] – A configured GraphSampler instance.
create_sampler
create_sampler(config: dict[str, Any]) -> Sampler[Any]

Create a sampler from a configuration dict.
This enables JSON-based sampler configuration for API endpoints.
Parameters:
- config (dict[str, Any]) – Configuration dict with:
  - "type": the registered sampler type name
  - "params": optional dict of parameters for the factory function
Returns:
- Sampler[Any] – A configured Sampler instance.
Raises:
- ValueError – If the sampler type is not registered.
Example
sampler = create_sampler({
    "type": "temperature_search",
    "params": {
        "base_model": "openai/gpt-4",
        "temperatures": [0.0, 0.5, 1.0],
    },
})
fuzzing_sampler
fuzzing_sampler(mutators: list[TransformLike[CandidateT, CandidateT]], initial_seeds: list[CandidateT], *, crossover_mutator: TransformLike[tuple[CandidateT, CandidateT], CandidateT] | None = None, selection_strategy: Literal["weighted", "uniform", "ucb"] = "weighted", retention_threshold: float = 0.5, max_pool_size: int = 100, candidates_per_iteration: int = 1) -> FuzzingSampler[CandidateT]

Create a fuzzing sampler for adversarial prompt generation.
Implements coverage-guided fuzzing where successful mutations are retained in a growing seed pool. Seeds that produce more successful offspring are selected more frequently.
Parameters:
- mutators (list[TransformLike[CandidateT, CandidateT]]) – List of mutation transforms (expand, shorten, rephrase, generate).
- initial_seeds (list[CandidateT]) – Starting seed templates.
- crossover_mutator (TransformLike[tuple[CandidateT, CandidateT], CandidateT] | None, default: None) – Optional transform for combining two seeds.
- selection_strategy (Literal["weighted", "uniform", "ucb"], default: "weighted") – Seed selection method: "weighted" favors seeds with higher success rates, "uniform" selects at random, "ucb" uses Upper Confidence Bound (explore-exploit balance).
- retention_threshold (float, default: 0.5) – Minimum score to add a mutation to the pool.
- max_pool_size (int, default: 100) – Maximum seeds to keep (prunes least successful).
- candidates_per_iteration (int, default: 1) – How many candidates to generate per iteration.
Returns:
- FuzzingSampler[CandidateT] – A configured FuzzingSampler instance.
Example
sampler = fuzzing_sampler(
    mutators=[expand_mutator, shorten_mutator, rephrase_mutator],
    initial_seeds=["You are a helpful assistant...", "Ignore previous..."],
    retention_threshold=0.5,
)

graph_neighborhood_sampler
graph_neighborhood_sampler(transform: TransformLike[list[Trial[CandidateT]], CandidateT], initial_candidate: CandidateT, *, neighborhood_depth: int = 2, frontier_size: int = 5, branching_factor: int = 3) -> GraphSampler[CandidateT]

Create a graph sampler with local neighborhood context.
The trial context includes trials in the local neighborhood up to 2h-1 distance away, where h is the neighborhood depth.
See: “Graph of Attacks” - https://arxiv.org/pdf/2504.19019v1
Parameters:
- transform (TransformLike[list[Trial[CandidateT]], CandidateT]) – Function that takes neighborhood context and generates candidates.
- initial_candidate (CandidateT) – The starting point for the search.
- neighborhood_depth (int, default: 2) – Depth ‘h’ for calculating the neighborhood size.
- frontier_size (int, default: 5) – Number of top candidates to form the next frontier.
- branching_factor (int, default: 3) – How many candidates to generate from each leaf.
Returns:
- GraphSampler[CandidateT] – A configured GraphSampler instance.
iterative_sampler
iterative_sampler(transform: TransformLike[list[Trial[CandidateT]], CandidateT], initial_candidate: CandidateT, *, branching_factor: int = 1, parent_depth: int = 10) -> GraphSampler[CandidateT]

Create a graph sampler for simple iterative refinement.
A single-path sampler that keeps only the best candidate at each step (k=1 pruning). Useful for greedy hill-climbing style optimization.
Parameters:
- transform (TransformLike[list[Trial[CandidateT]], CandidateT]) – Function that takes trial context and generates new candidates.
- initial_candidate (CandidateT) – The starting point for the search.
- branching_factor (int, default: 1) – How many candidates to generate each iteration.
- parent_depth (int, default: 10) – Number of ancestors to include in context for refinement.
Returns:
- GraphSampler[CandidateT] – A configured GraphSampler instance with k=1 pruning.
list_samplers
list_samplers() -> list[str]

List all registered sampler type names.
mapelites_sampler
mapelites_sampler(mutator: TransformLike[tuple[CandidateT, MutationTarget], CandidateT], initial_candidates: list[CandidateT], feature_dimensions: list[list[str]], *, selection_strategy: Literal["uniform", "sparse"] = "uniform", candidates_per_iteration: int = 1) -> MAPElitesSampler[CandidateT]

Create a MAP-Elites sampler for quality-diversity optimization.
MAP-Elites maintains a grid of “elites” - the best candidate found for each combination of behavioral features. This enables diverse exploration while still optimizing for quality.
Parameters:
- mutator (TransformLike[tuple[CandidateT, MutationTarget], CandidateT]) – Transform that takes (parent_candidate, target) and generates a mutated candidate targeting the specified feature values.
- initial_candidates (list[CandidateT]) – Seed candidates to start the archive.
- feature_dimensions (list[list[str]]) – List of feature value lists defining the grid. Example: [["risk1", "risk2"], ["style1", "style2"]] creates a 2 * 2 grid.
- selection_strategy (Literal["uniform", "sparse"], default: "uniform") – Parent selection method: "uniform" selects at random from the archive, "sparse" prioritizes under-explored regions.
- candidates_per_iteration (int, default: 1) – How many candidates to generate per iteration.
Returns:
- MAPElitesSampler[CandidateT] – A configured MAPElitesSampler instance.
Example
sampler = mapelites_sampler(
    mutator=my_mutation_transform,
    initial_candidates=["Start prompt"],
    feature_dimensions=[
        ["violence", "fraud", "hacking"],  # Risk categories
        ["roleplay", "authority", "emotion"],  # Attack styles
    ],
)

register_sampler
register_sampler(name: str) -> t.Callable[[t.Callable[..., Sampler[t.Any]]], t.Callable[..., Sampler[t.Any]]]

Decorator to register a sampler factory function.
Parameters:
- name (str) – The type name for this sampler (used in JSON config).
Example
@register_sampler("temperature_search")
def temperature_search(base_model: str, ...) -> GridSampler: ...
strategy_library_sampler
strategy_library_sampler(strategy_transform: TransformLike[dict[str, Any], str], extraction_transform: TransformLike[dict[str, Any], Strategy | None], embedding_transform: TransformLike[str, list[float]], strategy_store: StrategyStore | None = None, *, exploration_rate: float = 0.3, top_k_strategies: int = 5, retention_threshold: float = 0.7, candidates_per_iteration: int = 1) -> StrategyLibrarySampler

Create a strategy library sampler for lifelong adversarial learning.
Implements the core approach from AutoDAN-Turbo: maintaining a growing library of attack strategies that can be retrieved and combined.
Parameters:
- strategy_transform (TransformLike[dict[str, Any], str]) – Transform that generates attacks from (goal, strategies).
- extraction_transform (TransformLike[dict[str, Any], Strategy | None]) – Transform that extracts strategies from successful attacks.
- embedding_transform (TransformLike[str, list[float]]) – Transform that computes embeddings for text.
- strategy_store (StrategyStore | None, default: None) – Persistent strategy storage (created if None).
- exploration_rate (float, default: 0.3) – Probability of exploring vs. exploiting (0.0-1.0).
- top_k_strategies (int, default: 5) – Number of similar strategies to retrieve.
- retention_threshold (float, default: 0.7) – Minimum score to extract new strategies.
- candidates_per_iteration (int, default: 1) – How many candidates to generate per iteration.
Returns:
- StrategyLibrarySampler – A configured StrategyLibrarySampler instance.
Example
sampler = strategy_library_sampler(
    strategy_transform=attack_generator,
    extraction_transform=strategy_extractor,
    embedding_transform=embed_text,
    exploration_rate=0.3,
)