dreadnode.models
API reference for the dreadnode.models module.
Model loading and storage.
LocalModel

LocalModel(name: str, storage: Storage, version: str | None = None)

Model stored in CAS, usable without package installation.
This class provides a way to work with models stored in the Content-Addressable Storage without requiring them to be installed as Python packages with entry points.
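To illustrate the content-addressable idea itself (this is not the dreadnode implementation, just a minimal sketch), each blob is stored under the hex digest of its contents, so identical files are deduplicated automatically:

```python
import hashlib

class TinyCAS:
    """Minimal content-addressable store: blobs are keyed by SHA-256 digest."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data  # storing the same content twice is a no-op
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

cas = TinyCAS()
a = cas.put(b"model weights")
b = cas.put(b"model weights")
assert a == b  # identical content, identical address
```

Because addresses are derived from content, two model versions that share weight files share the underlying blobs.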
Example

```python
from dreadnode.models import LocalModel
from dreadnode.storage import Storage

storage = Storage()

# Save a HuggingFace model to CAS
from transformers import AutoModelForSequenceClassification

hf_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
local_model = LocalModel.from_hf(hf_model, "my-bert", storage)

# Load and use
model = local_model.to_hf()
tokenizer = local_model.tokenizer()
```
Load a local model by name.
Parameters:
name (str) – Model name.
storage (Storage) – Storage instance for CAS access.
version (str | None, default: None) – Specific version to load. If None, loads latest.
architecture

architecture: str | None
Model architecture.

files

files: list[str]
List of artifact file paths.

framework

framework: str
Model framework (safetensors, pytorch, onnx, etc.).

manifest

manifest: ModelManifest
Load and cache the manifest.

task

task: str | None
Model task type.
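The "load and cache" behavior described for manifest is the standard cached-property pattern; a sketch with a stand-in loader (the dict manifest here is invented for illustration):

```python
from functools import cached_property

class ManifestHolder:
    """Sketch of a lazily loaded, cached manifest attribute."""

    def __init__(self) -> None:
        self.load_count = 0

    @cached_property
    def manifest(self) -> dict:
        # In LocalModel this would read the manifest from CAS;
        # here we just count loads to show the caching.
        self.load_count += 1
        return {"name": "my-bert", "framework": "safetensors"}

holder = ManifestHolder()
holder.manifest  # first access loads
holder.manifest  # second access hits the cache
assert holder.load_count == 1
```

The first attribute access pays the load cost; subsequent accesses return the cached object.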
from_dir

from_dir(
    source_dir: str | Path,
    storage: Storage,
    *,
    name: str | None = None,
    version: str | None = None,
) -> LocalModel

Store a model source directory described by model.yaml in CAS.
from_hf

from_hf(
    model: PreTrainedModel,
    name: str,
    storage: Storage,
    *,
    tokenizer: PreTrainedTokenizer | None = None,
    format: Literal["safetensors", "pytorch"] = "safetensors",
    task: str | None = None,
    version: str = "0.1.0",
) -> LocalModel

Store a HuggingFace model in CAS and return a LocalModel.
Parameters:
model (PreTrainedModel) – HuggingFace PreTrainedModel to store.
name (str) – Name for the model.
storage (Storage) – Storage instance for CAS access.
tokenizer (PreTrainedTokenizer | None, default: None) – Optional tokenizer to save alongside the model.
format (Literal['safetensors', 'pytorch'], default: 'safetensors') – Save format (safetensors or pytorch).
task (str | None, default: None) – Task type for the manifest.
version (str, default: '0.1.0') – Version string.
Returns:
LocalModel–LocalModel instance for the stored model.
Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
local = LocalModel.from_hf(model, "my-gpt2", storage, tokenizer=tokenizer)
```
model_path

model_path() -> Path

Get the local path to the model directory.
Reconstructs the model directory structure from CAS blobs.
Returns:
Path–Path to local model directory.
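The "reconstructs from CAS blobs" step can be pictured as walking a path-to-digest mapping and writing each blob back to disk. A self-contained sketch (the entries layout is invented for illustration, not the actual manifest format):

```python
import hashlib
import tempfile
from pathlib import Path

def reconstruct(blobs: dict[str, bytes], entries: dict[str, str], out_dir: Path) -> Path:
    """Rebuild a directory tree from {relative_path: digest} entries."""
    for rel_path, digest in entries.items():
        target = out_dir / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(blobs[digest])
    return out_dir

# Store two "files" as content-addressed blobs.
config = b'{"model_type": "bert"}'
weights = b"\x00\x01\x02"
blobs = {hashlib.sha256(b).hexdigest(): b for b in (config, weights)}
entries = {
    "config.json": hashlib.sha256(config).hexdigest(),
    "model.safetensors": hashlib.sha256(weights).hexdigest(),
}

out = reconstruct(blobs, entries, Path(tempfile.mkdtemp()))
assert (out / "config.json").read_bytes() == config
```

Because blobs are shared, rebuilding several versions of a model only materializes each distinct file once per output directory.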
publish

publish(version: str | None = None) -> None

Create a DN package for signing and distribution.
Parameters:
version (str | None, default: None) – Version for the package. If None, uses the current version.
Raises:
NotImplementedError – Package creation not yet implemented.
to_hf

to_hf(
    *,
    trust_remote_code: bool = False,
    torch_dtype: Any = None,
    device_map: str | None = None,
    **kwargs: Any,
) -> PreTrainedModel

Load as a HuggingFace PreTrainedModel.
Parameters:
trust_remote_code (bool, default: False) – Whether to trust remote code.
torch_dtype (Any, default: None) – Torch dtype for model weights.
device_map (str | None, default: None) – Device map for model parallelism.
**kwargs (Any, default: {}) – Additional arguments for from_pretrained.
Returns:
PreTrainedModel–HuggingFace PreTrainedModel.
tokenizer

tokenizer(*, trust_remote_code: bool = False, **kwargs: Any) -> PreTrainedTokenizer

Load the associated tokenizer.
Parameters:
trust_remote_code (bool, default: False) – Whether to trust remote code.
**kwargs (Any, default: {}) – Additional arguments for from_pretrained.
Returns:
PreTrainedTokenizer–HuggingFace PreTrainedTokenizer.
Model

Model(
    name: str,
    storage: Storage | None = None,
    version: str | None = None,
)

Published model loader backed by local storage manifests.
load_model

load_model(
    path: str | Path,
    *,
    model_name: str | None = None,
    storage: Storage | None = None,
    task: str | None = None,
    format: Literal["safetensors", "pytorch"] = "safetensors",
    version: str | None = None,
    **kwargs: Any,
) -> LocalModel

Load a model from HuggingFace Hub or a local source directory.
Parameters:
path (str | Path) – HuggingFace model path or a local model source directory.
model_name (str | None, default: None) – Name to store the model as locally. Defaults to the path.
storage (Storage | None, default: None) – Storage instance. If None, creates default storage.
task (str | None, default: None) – Task type for the model.
format (Literal['safetensors', 'pytorch'], default: 'safetensors') – Storage format (safetensors or pytorch).
version (str | None, default: None) – Version string for the stored model.
**kwargs (Any, default: {}) – Additional arguments passed to from_pretrained.
Returns:
LocalModel–LocalModel instance with the loaded model.
Example
```python
from dreadnode.models import load_model

# Load and store a HuggingFace model
model = load_model("bert-base-uncased", task="classification")
hf_model = model.to_hf()
```