# Models
Browse, compare, and manage shared model artifacts in the Dreadnode platform registry.
Models are versioned artifacts published into an organization registry so teams can share trained weights, compare releases, attach metrics, and download exact versions later.
In the App IA, this page lives under Hub.
This page is about stored model artifacts in the registry. It is not the same thing as the interactive runtime model picker described in TUI Models and Selection, the per-user Chat Models settings surface, or the catalog of hosted `dn/` system models returned by the inference APIs.
## What the page is for

The platform model page is the place to:
- browse model artifacts from your organization and the public catalog
- search, sort, and filter by tags, license, task category, framework, and size
- inspect version metadata such as architecture, framework, aliases, and metrics
- compare multiple versions side by side before promoting one
- download, publish, unpublish, or delete versions you own
Each model card groups many versions under one name so you can compare releases without bouncing between separate pages.
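Conceptually, that grouping is plain data: a card maps version strings to their version-level metrics, and a side-by-side comparison is a lookup across that map. A minimal sketch, where the metric names, values, and helper function are hypothetical illustrations rather than a platform API:

```python
# Hypothetical metrics for two versions grouped under one model card.
# On the platform, these are the version-level evaluation numbers
# attached to each published release.
versions = {
    "0.9.0": {"accuracy": 0.81, "latency_ms": 142},
    "1.0.0": {"accuracy": 0.86, "latency_ms": 131},
}

def best_version(metric: str, higher_is_better: bool = True) -> str:
    """Pick the version whose value for one metric is best."""
    return max(
        versions,
        key=lambda v: versions[v][metric] if higher_is_better else -versions[v][metric],
    )

print(best_version("accuracy"))                          # -> 1.0.0
print(best_version("latency_ms", higher_is_better=False))  # -> 1.0.0
```

The same comparison across many metrics is what the Hub's side-by-side view gives you without leaving the model card.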
## Workflow

Model artifacts are the durable output side of training and fine-tuning workflows.
- inspect and publish a local model package
- compare versions and metrics in the Hub
- optionally assign aliases for human workflows
- pin one exact version in downstream automation
- download that version later when you need the stored artifact locally
That makes the Hub model page a release-management surface, not an inference model selector.
## Visibility, versions, and comparison

| Concept | What it means |
|---|---|
| org-scoped | visible only inside the owning organization |
| public | visible across organizations in the shared catalog |
| canonical name | shown as `<owner>/<name>` when ownership matters |
| pinned reference | use `org/name@version` when automation must resolve one exact artifact |
| aliases | human-friendly labels that point at a specific version |
| metrics | version-level evaluation numbers you can compare across releases |
Aliases are useful for human workflows, but agents should still resolve down to an explicit version before running reproducible automation.
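That resolve-before-running rule can be sketched as a tiny helper: an alias is looked up once, and everything downstream carries the exact `org/name@version` string. The alias table and `pin` function below are hypothetical, assuming an `acme/assistant-lora` model with a `stable` alias:

```python
# Hypothetical alias table; on the platform, aliases live on the model card.
ALIASES = {"acme/assistant-lora": {"stable": "1.0.0", "latest": "1.1.0"}}

def pin(ref: str) -> str:
    """Resolve `org/name@label` to an exact `org/name@version` string.

    If the label after `@` is a known alias, swap in the version it points
    at; if it is already an explicit version, return the ref unchanged.
    """
    name, _, label = ref.partition("@")
    if not label:
        raise ValueError("automation should never run from an unpinned ref")
    version = ALIASES.get(name, {}).get(label, label)
    return f"{name}@{version}"

print(pin("acme/assistant-lora@stable"))  # -> acme/assistant-lora@1.0.0
```

Humans keep the convenience of `@stable`; the job logs and lockfiles record only the pinned version, so the run is reproducible after the alias moves.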
## What agents should assume

- A registry model artifact is a stored package, not the same as picking a hosted or BYOK inference model for a chat runtime.
- Prefer pinned refs such as `org/name@version` over aliases when the result must be reproducible.
- Use comparison and metrics to choose between versions before attaching one to downstream jobs or release notes.
- Use the owning org for visibility changes and deletion.
## Common workflows

```shell
dreadnode model inspect ./models/assistant-lora
dreadnode model compare assistant-lora 0.9.0 1.0.0
```

Use Packages and Registry for artifact operations, TUI Models and Selection and Chat Models when you mean live inference choice instead of stored artifacts, and the SDK API Client when you need model-registry inspection or hosted system-model lookup from Python.