Chat Models

Manage the inference models that appear in your assistant picker and understand how this user preference surface differs from registry models.

Chat Models is the settings surface for curating which inference models appear in your assistant picker.

In the App IA, this page lives under Account inside Settings.

This page is about user-scoped model preferences. It is not the same thing as the shared Models registry page under Hub, and it is not the same thing as picking the currently active model in the TUI model picker.

Use Chat Models to:

  • add or remove hosted dn/... models and provider-backed openai/..., anthropic/..., and similar model IDs from your assistant picker
  • verify whether each enabled model is currently usable based on required provider keys
  • keep a personal default set of allowed chat models without changing the shared model registry

The current UI shows one row per enabled model with:

  • the full model ID
  • the inferred provider
  • a readiness state such as Ready or Needs OPENAI_API_KEY
  • add and remove controls for the enabled list

If a model depends on a provider key that is not configured, the page surfaces that requirement and links you to Secrets.
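The per-row readiness state can be pictured as a simple lookup: infer the provider from the model ID prefix, then check whether that provider's key is configured. This is only an illustrative sketch; the prefix-to-key mapping below is an assumption, and the real provider inference and key checks happen server-side.

```python
# Hypothetical mapping from model ID prefix to the provider key the
# platform would require; the actual mapping lives in the backend.
PROVIDER_KEYS = {
    "openai/": "OPENAI_API_KEY",
    "anthropic/": "ANTHROPIC_API_KEY",
    "dn/": None,  # hosted models need no user-supplied key
}

def readiness(model_id: str, configured_keys: set[str]) -> str:
    """Return a per-row readiness label like the ones shown on this page."""
    for prefix, key in PROVIDER_KEYS.items():
        if model_id.startswith(prefix):
            if key is None or key in configured_keys:
                return "Ready"
            return f"Needs {key}"
    return "Unknown provider"
```

With no keys configured, `readiness("openai/some-model", set())` yields `Needs OPENAI_API_KEY`, which is exactly the state that links you to Secrets.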

The durable preference is the user’s enabled_model_ids list.

  • if you have never configured it, the backend can return the broader available model set
  • adding a model validates that the model ID is recognized
  • removing models must still leave at least one enabled model
  • the response also carries readiness information such as required provider keys
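The rules above can be sketched as two small operations on the enabled list. This is a minimal in-memory model of the described behavior, not the SDK or backend implementation; the model IDs used here are placeholders.

```python
def add_model(enabled: list[str], model_id: str, known_ids: set[str]) -> list[str]:
    """Adding a model validates that the model ID is recognized."""
    if model_id not in known_ids:
        raise ValueError(f"unrecognized model ID: {model_id}")
    if model_id in enabled:
        return enabled  # already in the saved shortlist
    return enabled + [model_id]

def remove_model(enabled: list[str], model_id: str) -> list[str]:
    """Removing models must still leave at least one enabled model."""
    remaining = [m for m in enabled if m != model_id]
    if not remaining:
        raise ValueError("at least one chat model must remain enabled")
    return remaining
```

Note that a failed removal leaves the saved `enabled_model_ids` list untouched, which matches how the page refuses to empty your shortlist.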

Concept and what it means:

  • user-scoped: your enabled model list is saved per user, not per organization
  • hosted dn/ models: Dreadnode-managed inference models available through the platform
  • BYOK provider models: provider-hosted models such as openai/... or anthropic/..., gated by secrets
  • readiness: whether the required provider key is currently configured
  • model picker options: the set of models you allow to appear in the assistant UI

The settings page defines the allowed set. The currently active model for a given session is still chosen at runtime in the assistant or TUI.

  1. enable the models you want to appear in your assistant picker
  2. fix any missing provider keys in Secrets
  3. switch the live session’s model in the assistant or TUI model picker
  4. come back here when you want to change the saved shortlist rather than one live session

Use Models when you are browsing or publishing versioned model artifacts in the shared registry.

Use Chat Models when you are controlling which inference backends appear in your own interactive assistant picker.

Use TUI Models and Selection when you want to switch the active model for the current session or process.

Use Chat Models when you want to change the saved shortlist that appears in picker-style assistant surfaces. The local TUI /models browser can still list the broader API-synced supported set for ad hoc local testing.

Use Secrets to configure provider keys such as OPENAI_API_KEY or ANTHROPIC_API_KEY.

Use Chat Models to confirm whether those keys make a specific model usable.

  • this page is backed by user preferences rather than org-wide settings
  • hosted dn/ models and provider-backed BYOK models can appear together in the same picker
  • missing provider keys make a model unavailable without removing it from the saved list
  • at least one chat model must remain enabled
  • agent-specific model settings can still override the interactive picker at execution time

Use Settings for the surrounding settings shell, Secrets for provider keys, Models for stored registry artifacts, and SDK API Client when you need list_system_models() or get_user_preferences() from Python.
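Those two SDK calls are enough to reconstruct the picker set in Python. Only the function names `list_system_models()` and `get_user_preferences()` come from the SDK API Client page; the client class, return shapes, and fields below are illustrative stubs, not the real SDK.

```python
# Illustrative stub of the SDK client; return shapes are assumptions.
class StubApiClient:
    def list_system_models(self) -> list[dict]:
        # Assumed shape: one entry per model the platform recognizes.
        return [
            {"id": "dn/example-model", "required_key": None},
            {"id": "openai/example-model", "required_key": "OPENAI_API_KEY"},
        ]

    def get_user_preferences(self) -> dict:
        # Assumed shape: the user's saved enabled_model_ids preference.
        return {"enabled_model_ids": ["dn/example-model"]}

client = StubApiClient()
enabled = set(client.get_user_preferences()["enabled_model_ids"])
# The picker shows the intersection of recognized models and your shortlist.
picker = [m["id"] for m in client.list_system_models() if m["id"] in enabled]
```

With a real client in place of the stub, `picker` would hold the same model IDs this settings page displays as rows.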