Packages & Capabilities

Load, publish, and reuse datasets, models, environments, and local capabilities from the Python SDK.

Use this page when you need to answer one of two questions:

  • “How do I load a reusable artifact into Python?”
  • “How do I publish something I built locally so the rest of the platform can use it?”

Dreadnode has two closely related concepts:

  • packages are versioned, durable artifacts such as datasets, models, and environments
  • capabilities are reusable agent bundles made of agent prompts, tools, skills, MCP definitions, and runtime metadata such as dependencies and checks

For compatibility, the local SDK can still resolve exported Hook objects from hooks/ when an older capability bundle includes them. Treat that as legacy loader behavior, not as the primary v1 authoring surface.

The SDK uses slightly different entry points depending on which thing you are working with.

| Goal | Use this API | Notes |
| --- | --- | --- |
| Pull a published package locally | `dn.pull_package(["dataset://org/name:version"])` | Makes the package available in local storage or cache |
| Load a pulled dataset or model package | `dn.load_package("dataset://org/name@version")` or `dn.load_package("model://org/name@version")` | Opens a package that is already locally available |
| Load a local capability directory | `dn.load_capability("./capabilities/recon-kit")` | Capabilities are loaded from disk or the local capability search path |
| Publish a capability | `dn.push_capability("./capabilities/recon-kit")` | Builds and pushes an OCI-backed capability bundle |
| Publish a dataset, model, or environment | `dn.push_dataset(...)`, `dn.push_model(...)`, `dn.push_environment(...)` | Environments are the SDK-side package name for tasks |
| Inspect what is available programmatically | `dn.list_registry("capabilities")`, `"datasets"`, `"models"`, or `"environments"` | Combined local and remote registry discovery when configured |

There are two common reference styles in the SDK:

  • load_package() uses scheme://org/name@version
  • pull_package() uses OCI-style scheme://org/name:version

Examples:

```python
import dreadnode as dn

dn.pull_package(
    [
        "dataset://acme/support-evals:1.0.0",
        "model://acme/vuln-classifier:2.1.0",
        "capability://acme/recon-kit:1.2.0",
        "environment://acme/sqli-lab:1.0.0",
    ]
)

dataset = dn.load_package("dataset://acme/[email protected]")
model = dn.load_package("model://acme/[email protected]")
```

Capabilities are the exception: there is no dn.load_package("capability://...") convenience path. The normal flow is to pull or install them first, then load them locally with dn.load_capability(...).

For published datasets and models, the SDK workflow is two-step:

  1. pull_package() to make the package available locally
  2. load_package() or dn.load() to open that local package as a Python object

If you skip the pull step and the package is not already present in local storage, the loader will raise and tell you which pull_package() call to use.
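That pull-then-load contract can be wrapped in a small retry helper. This is a sketch, not SDK behavior: the loader and puller are injected as plain callables (e.g. `dn.load_package` and `dn.pull_package`), and a broad exception type is assumed because the docs do not name the one the loader raises:

```python
from typing import Any, Callable


def load_or_pull(
    load_ref: str,
    pull_ref: str,
    load: Callable[[str], Any],
    pull: Callable[[list[str]], None],
) -> Any:
    """Try to open a local package; pull it once on failure, then retry.

    `load` and `pull` stand in for dn.load_package and dn.pull_package;
    they are injected so this retry logic stays SDK-agnostic.
    """
    try:
        return load(load_ref)
    except Exception:
        # The loader raises when the package is not in local storage.
        pull([pull_ref])
        return load(load_ref)
```

With the real SDK this would be called as `load_or_pull("dataset://acme/[email protected]", "dataset://acme/support-evals:1.0.0", dn.load_package, dn.pull_package)`.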

```python
import dreadnode as dn

dn.configure(
    server="https://app.dreadnode.io",
    api_key="dn_...",
    organization="acme",
)

dn.pull_package(
    [
        "dataset://acme/support-evals:1.0.0",
        "model://acme/vuln-classifier:2.1.0",
    ]
)

dataset = dn.load_package("dataset://acme/[email protected]")
model = dn.load_package("model://acme/[email protected]")

print(dataset.to_pandas().head())
print(model)
```

Use pinned versions when reproducibility matters. A benchmark, training job, or optimization run should not rely on an implicitly moving “latest” package.

If you prefer the unified loader, dn.load("dataset://...") and dn.load("model://...") use the same local-package expectation.

Capabilities are loaded from a directory on disk. The resulting Capability object gives you resolved agents, tools, skills, MCP server definitions, dependency metadata, and health checks. Legacy bundles may also expose capability.hooks.

```python
import dreadnode as dn
from dreadnode.agents import Agent

dn.configure()

capability = dn.load_capability("./capabilities/threat-hunting")

print(capability.name, capability.version)
print([agent.name for agent in capability.agents])
print([tool.name for tool in capability.tools])
print(capability.dependencies.python)
print([check.name for check in capability.checks])
print([server.name for server in capability.mcp_server_defs])

agent = Agent(
    name="triage",
    model="openai/gpt-4o-mini",
    instructions="Investigate the indicator and summarize the risk.",
    tools=capability.tools,
)
```

Useful fields on the resolved object:

  • capability.manifest for raw manifest metadata
  • capability.agents for the entry agents declared in the bundle
  • capability.tools for exported Python tools
  • capability.skills_paths for attached SKILL.md directories
  • capability.mcp_server_defs for MCP runtime definitions
  • capability.dependencies for declared sandbox install metadata
  • capability.checks for declared pre-flight checks
  • capability.hooks for exported hook objects on legacy bundles that still ship them

For capability authoring details, see Custom Capabilities.

Once a local source directory is valid, the Python SDK can publish it directly.

```python
import dreadnode as dn

dn.configure(
    server="https://app.dreadnode.io",
    api_key="dn_...",
    organization="acme",
)

cap = dn.push_capability("./capabilities/recon-kit", publish=True)
dataset = dn.push_dataset("./datasets/support-evals", publish=True)
model = dn.push_model("./models/vuln-classifier", publish=False)
environment = dn.push_environment("./tasks/sqli-lab", publish=True)

print(cap.name, cap.version, cap.status)
print(dataset.package_name, dataset.package_version)
print(model.package_name, model.package_version)
print(environment.package_name, environment.package_version)
```

Three details matter:

  • bare names are prefixed with the active organization when you are configured against a server
  • publish=True updates visibility after upload; it is not just a local build flag
  • skip_upload=True is useful for validating buildability without pushing to a remote registry
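The bare-name prefixing rule in the first bullet can be sketched as a pure function. `qualify` is hypothetical and only mirrors the behavior described above; the SDK applies this internally:

```python
def qualify(name: str, organization: str) -> str:
    """Prefix a bare package name with the active organization.

    Hypothetical mirror of the SDK's bare-name rule: names that already
    contain an org segment are left untouched.
    """
    if "/" in name:
        return name  # already org-qualified
    return f"{organization}/{name}"
```

So, configured against "acme", a bare "recon-kit" resolves to "acme/recon-kit", while an explicit "other/recon-kit" is respected as-is.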

For datasets and models, the SDK uploads the referenced blobs to storage, then pushes a manifest image. For capabilities and environments, the SDK pushes OCI-backed bundles.

In the CLI and app, the user-facing concept is usually a task or environment. In the SDK’s package layer, the publish helper is push_environment(...).

Use this translation mentally:

  • app / CLI: task or environment
  • SDK package layer: environment package
  • registry URI: environment://org/name:version
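The translation ends in a plain string. A hypothetical one-liner shows how the registry URI is assembled from the pieces above:

```python
def environment_uri(org: str, name: str, version: str) -> str:
    """Build the registry URI for what the app and CLI call a task.

    Hypothetical helper illustrating the naming translation; the SDK
    accepts these URIs directly in pull_package().
    """
    return f"environment://{org}/{name}:{version}"
```

For example, `environment_uri("acme", "sqli-lab", "1.0.0")` produces the same reference pulled in the earlier example.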

Most teams end up doing this:

  1. Author a capability or dataset locally.
  2. Validate it in Python.
  3. Publish it with push_capability(), push_dataset(), push_model(), or push_environment().
  4. Pin the published version in evaluations, optimization jobs, or training jobs.

That keeps your runtime behavior reproducible and makes it easy to answer “which version did we actually run?”