Publishing
Package a trained model as a Dreadnode artifact, write model.yaml, and push a version to the registry.
Publishing a model is two decisions: what goes into the directory, and which framework the registry should record. Everything downstream — version comparison, metric attachment, pulling — operates on what you push here.
The directory shape
```
support-assistant/
  model.yaml               # required — the manifest
  model.safetensors
  config.json
  tokenizer.json
  tokenizer_config.json
  special_tokens_map.json
```

Every file under the directory (except `model.yaml` and OS junk like `.DS_Store`) becomes an artifact. Constrain the set explicitly with `files:` when the directory contains things you don’t want published.
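The discovery rule above can be sketched as a few lines of Python. This is an illustration of the documented behavior, not the registry's actual implementation, and the exclusion list beyond `model.yaml` and `.DS_Store` is an assumption:

```python
from pathlib import Path

# Assumed junk list; the docs name model.yaml and .DS_Store explicitly.
EXCLUDED = {"model.yaml", ".DS_Store"}

def discover_artifacts(model_dir: str) -> list[str]:
    """Every regular file under the directory, minus the manifest and
    OS junk, becomes an artifact (sketch of the documented rule)."""
    root = Path(model_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.name not in EXCLUDED
    )
```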
See the manifest reference for every accepted field.
Minimum manifest
```yaml
name: support-assistant
version: 0.1.0
```

`framework` is inferred from the file extensions present, in priority order: any `.safetensors` → safetensors; otherwise any `.onnx` → onnx; otherwise any `.pt`/`.pth`/`.bin` → pytorch; otherwise safetensors. A directory with both a PyTorch checkpoint and a safetensors file resolves to safetensors.
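The priority order reads naturally as a cascade. A minimal sketch of the documented inference rule (illustrative only; the real resolver may differ in details):

```python
def infer_framework(filenames: list[str]) -> str:
    """Sketch of the documented priority: safetensors > onnx > pytorch,
    falling back to safetensors when nothing matches."""
    exts = {name.rsplit(".", 1)[-1].lower() for name in filenames if "." in name}
    if "safetensors" in exts:
        return "safetensors"
    if "onnx" in exts:
        return "onnx"
    if exts & {"pt", "pth", "bin"}:
        return "pytorch"
    return "safetensors"
```

So a directory holding both `model.bin` and `model.safetensors` resolves to safetensors, matching the tie-break above.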
Full fine-tune
Fill in the catalog metadata so the Hub record is useful to someone who didn’t train the model:

```yaml
name: support-assistant
version: 1.0.0
summary: 7B assistant fine-tuned on support tickets.
framework: safetensors
architecture: LlamaForCausalLM
task: text-generation
base_model: meta-llama/Llama-3.1-8B-Instruct
dataset_refs:
license: apache-2.0
language: [en]
tags: [assistant, support, sft]
task_categories: [conversational]
size_category: 1-7B
```

`base_model` and `dataset_refs` form the training provenance chain — downstream consumers follow the links to understand what went into the weights.
LoRA adapter
LoRAs are published the same way as a full model, with a smaller file set and a `base_model` pointer:

```yaml
name: support-assistant-lora
version: 0.3.0
summary: LoRA adapter for Llama-3.1-8B-Instruct, rank 16.
framework: safetensors
base_model: meta-llama/Llama-3.1-8B-Instruct
dataset_refs:
files:
  - adapter_config.json
  - adapter_model.safetensors
  - tokenizer.json
  - tokenizer_config.json
  - special_tokens_map.json
```

Explicit `files:` prevents accidentally shipping a full checkpoint alongside the adapter.
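One way to sanity-check an explicit `files:` allowlist before pushing is to compare it against what is actually on disk. This is a local helper sketch, not part of the `dn` CLI:

```python
from pathlib import Path

def check_file_list(model_dir: str, declared: list[str]) -> tuple[set[str], set[str]]:
    """Return (missing, undeclared): declared files absent from disk, and
    on-disk files (besides model.yaml and OS junk) the allowlist excludes."""
    root = Path(model_dir)
    on_disk = {
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.name not in {"model.yaml", ".DS_Store"}
    }
    wanted = set(declared)
    return wanted - on_disk, on_disk - wanted
```

A non-empty second set is exactly the "full checkpoint next to the adapter" situation the allowlist guards against.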
ONNX export
```yaml
name: support-classifier-onnx
version: 0.1.0
framework: onnx
task: sequence-classification
architecture: DistilBertForSequenceClassification
```

ONNX models are usually single-file. Let the discovery rules pick it up, or declare it explicitly with `files:`.
Inspect before pushing
```shell
dn model inspect ./support-assistant
```

```
framework:    safetensors
task:         text-generation
architecture: LlamaForCausalLM

Files
  model.safetensors
  config.json
  tokenizer.json
  tokenizer_config.json
  special_tokens_map.json
```

`inspect` reads `model.yaml`, hashes each file, and prints the manifest the registry would record. Use it as a local pre-flight.
Push to the registry
```shell
dn model push ./support-assistant
```

```
Pushed acme/support-assistant@1.0.0 (sha256:ab3c7f...)
```

The CLI validates the manifest, hashes every artifact, uploads only the files the registry doesn’t already have, and writes the versioned manifest. Re-publishing a checkpoint with a single changed file ships only that file.
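The upload-only-what-changed behavior is plain content addressing: hash each file, ship only the digests the registry lacks. A client-side sketch of the idea (assumed mechanics, not the actual wire protocol):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Digest a file in 1 MiB chunks so large checkpoints stream."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

def plan_upload(model_dir: str, remote_digests: set[str]) -> list[str]:
    """Files whose content digest the registry does not already hold."""
    root = Path(model_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and sha256_file(p) not in remote_digests
    )
```

Under this model, re-publishing with one changed file yields a one-element plan, which is the behavior described above.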
Override the registry name with --name, or cross-publish into another organization you have write access to:
```shell
dn model push ./support-assistant --name acme-research/support-assistant
```

Dry-run

```shell
dn model push ./support-assistant --skip-upload
```

Runs every local step and stops before the HTTP upload. Useful for CI validation before paying the bytes.
Publish from Python
Section titled “Publish from Python”import dreadnode as dn
dn.configure(server="https://app.dreadnode.io", api_key="dn_...", organization="acme")
result = dn.push_model("./support-assistant")print(result.package_name, result.package_version)# acme/support-assistant 1.0.0dn.push_model accepts the same skip_upload and name arguments as the CLI. The returned PushResult carries manifest_digest, blobs_uploaded, and blobs_skipped.
Control visibility
Models are private to your organization by default. Visibility is name-level — every version of `acme/support-assistant` shares one setting.
| Action | Command |
|---|---|
| Make the model public | `dn model publish support-assistant` |
| Restrict it again | `dn model unpublish support-assistant` |
| Publish at push time | `dn model push ./... --publish` |
`publish` and `unpublish` accept multiple names and reject version-qualified refs — the switch flips the whole family.
What to reach for next
- Shortest end-to-end push → Quickstart
- Compare versions, attach metrics, tag aliases → Versions & metrics
- Browse what’s in the registry and pull a version → Catalog
- Download and load a published model → Using in code
- Every `model.yaml` field → Manifest reference
- Every CLI verb → `dn model`