dreadnode.models

API reference for the dreadnode.models module.

Model loading and storage.

LocalModel(
name: str, storage: Storage, version: str | None = None
)

Model stored in CAS, usable without package installation.

This class provides a way to work with models stored in the Content-Addressable Storage without requiring them to be installed as Python packages with entry points.

Example

from dreadnode.models import LocalModel
from dreadnode.storage import Storage
from transformers import AutoModelForSequenceClassification

storage = Storage()

hf_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
local_model = LocalModel.from_hf(hf_model, "my-bert", storage)

model = local_model.to_hf()
tokenizer = local_model.tokenizer()

Load a local model by name.

Parameters:

  • name (str) – Model name.
  • storage (Storage) – Storage instance for CAS access.
  • version (str | None, default: None) – Specific version to load. If None, loads the latest.
architecture: str | None

Model architecture.

files: list[str]

List of artifact file paths.

framework: str

Model framework (safetensors, pytorch, onnx, etc.).

manifest: ModelManifest

Load and cache the manifest.

task: str | None

Model task type.

from_dir(
source_dir: str | Path,
storage: Storage,
*,
name: str | None = None,
version: str | None = None,
) -> LocalModel

Store a model source directory described by model.yaml in CAS.
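The shape of such a source directory can be sketched as follows. The model.yaml fields shown are assumptions for illustration, not the verified schema; the commented lines show where from_dir would be called once dreadnode is installed:

```python
import tempfile
from pathlib import Path

# Build a minimal model source directory. The exact model.yaml fields are
# assumptions for illustration; consult your model.yaml for the real schema.
src = Path(tempfile.mkdtemp())
(src / "model.yaml").write_text(
    "name: my-model\n"
    "version: 0.1.0\n"
    "framework: safetensors\n"
)
(src / "model.safetensors").write_bytes(b"")  # placeholder weights file

# With dreadnode available, the directory can then be stored in CAS:
# from dreadnode.models import LocalModel
# from dreadnode.storage import Storage
# local = LocalModel.from_dir(src, Storage(), name="my-model")
```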

from_hf(
model: PreTrainedModel,
name: str,
storage: Storage,
*,
tokenizer: PreTrainedTokenizer | None = None,
format: Literal[
"safetensors", "pytorch"
] = "safetensors",
task: str | None = None,
version: str = "0.1.0",
) -> LocalModel

Store a HuggingFace model in CAS and return LocalModel.

Parameters:

  • model (PreTrainedModel) – HuggingFace PreTrainedModel to store.
  • name (str) – Name for the model.
  • storage (Storage) – Storage instance for CAS access.
  • tokenizer (PreTrainedTokenizer | None, default: None) – Optional tokenizer to save alongside the model.
  • format (Literal['safetensors', 'pytorch'], default: 'safetensors') – Save format (safetensors or pytorch).
  • task (str | None, default: None) – Task type for the manifest.
  • version (str, default: '0.1.0') – Version string.

Returns:

  • LocalModel – LocalModel instance for the stored model.

Example

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
local = LocalModel.from_hf(model, "my-gpt2", storage, tokenizer=tokenizer)

model_path() -> Path

Get the local path to the model directory.

Reconstructs the model directory structure from CAS blobs.

Returns:

  • Path – Path to local model directory.

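The reconstruction step can be pictured with a toy content-addressable store: blobs are keyed by the hash of their contents, and a manifest maps file paths to hashes. This is a conceptual sketch only, not dreadnode's actual implementation:

```python
import hashlib
import tempfile
from pathlib import Path

# Toy content-addressable store: blobs keyed by the sha256 of their bytes.
blobs: dict[str, bytes] = {}

def put(content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    blobs[digest] = content
    return digest

# A manifest maps artifact file paths to blob digests.
manifest = {
    "config.json": put(b'{"model_type": "bert"}'),
    "model.safetensors": put(b"\x00" * 8),
}

# Rebuild the model directory from blobs, as model_path() does conceptually.
out = Path(tempfile.mkdtemp())
for rel_path, digest in manifest.items():
    (out / rel_path).write_bytes(blobs[digest])

print((out / "config.json").read_text())  # → {"model_type": "bert"}
```

Because files are addressed by content, identical blobs shared across model versions are stored only once.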
publish(version: str | None = None) -> None

Create a DN package for signing and distribution.

Parameters:

  • version (str | None, default: None) – Version for the package. If None, uses current version.

Raises:

  • NotImplementedError – Package creation not yet implemented.

to_hf(
*,
trust_remote_code: bool = False,
torch_dtype: Any = None,
device_map: str | None = None,
**kwargs: Any,
) -> PreTrainedModel

Load as HuggingFace PreTrainedModel.

Parameters:

  • trust_remote_code (bool, default: False) – Whether to trust remote code.
  • torch_dtype (Any, default: None) – Torch dtype for model weights.
  • device_map (str | None, default: None) – Device map for model parallelism.
  • **kwargs (Any, default: {}) – Additional arguments for from_pretrained.

Returns:

  • PreTrainedModel – HuggingFace PreTrainedModel.

tokenizer(
*, trust_remote_code: bool = False, **kwargs: Any
) -> PreTrainedTokenizer

Load the associated tokenizer.

Parameters:

  • trust_remote_code (bool, default: False) – Whether to trust remote code.
  • **kwargs (Any, default: {}) – Additional arguments for from_pretrained.

Returns:

  • PreTrainedTokenizer – HuggingFace PreTrainedTokenizer.

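Together, to_hf() and tokenizer() round-trip a stored model back into transformers objects. A hedged sketch follows; the model name "my-bert" and the availability of torch and dreadnode are assumptions, and the guard simply skips the load when the packages are absent:

```python
# Round-trip sketch: load a previously stored model for inference.
# torch_dtype and device_map are forwarded to from_pretrained.
try:
    import torch
    from dreadnode.models import LocalModel
    from dreadnode.storage import Storage

    local = LocalModel("my-bert", Storage())
    hf_model = local.to_hf(torch_dtype=torch.float16, device_map="auto")
    tok = local.tokenizer()
    inputs = tok("hello world", return_tensors="pt")
except ImportError:
    hf_model = tok = None  # torch or dreadnode not installed
```
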
Model(
name: str,
storage: Storage | None = None,
version: str | None = None,
)

Published model loader backed by local storage manifests.

load_model(
path: str | Path,
*,
model_name: str | None = None,
storage: Storage | None = None,
task: str | None = None,
format: Literal[
"safetensors", "pytorch"
] = "safetensors",
version: str | None = None,
**kwargs: Any,
) -> LocalModel

Load a model from HuggingFace Hub or a local source directory.

Parameters:

  • path (str | Path) – HuggingFace model path or a local model source directory.
  • model_name (str | None, default: None) – Name to store the model as locally. Defaults to the path.
  • storage (Storage | None, default: None) – Storage instance. If None, creates default storage.
  • task (str | None, default: None) – Task type for the model.
  • format (Literal['safetensors', 'pytorch'], default: 'safetensors') – Storage format (safetensors or pytorch).
  • version (str | None, default: None) – Version string for the stored model.
  • **kwargs (Any, default: {}) – Additional arguments passed to from_pretrained.

Returns:

  • LocalModel – LocalModel instance with the loaded model.

Example

from dreadnode.models import load_model

model = load_model("bert-base-uncased", task="classification")
hf_model = model.to_hf()