User-Defined Models
Early Access
Strikes is currently in early access with trusted partners. Contact us if you'd like access.
Strikes supports standard inference models for quick testing, but you'll likely want to deploy agents using your preferred inference providers, endpoints, and API keys. These are known as user-defined models. The CLI provides a set of commands under `dreadnode model` to create and manage local configurations. To use these models in Strike runs, pass `-m/--model <id>` to `dreadnode agent deploy`. The configuration is sent to the server, used for the duration of the run, and discarded afterward.
The Strikes subsystem proxies inference traffic using our Rigging library, specifically with the LiteLLM generator. We apply the same "generator id" concept from Rigging to configure inference on the Strike dropship. Typically, this involves identifying the appropriate generator id for your model in Rigging and then installing it using the CLI.
Example
Suppose you have a Deepseek API key and want to use one of their models in your agent. After reviewing the LiteLLM docs and the Deepseek model list, you want to make both `deepseek-chat` and `deepseek-reasoner` available.
Start by installing the models with the following commands:

```
$ dreadnode model add --id deepseek-r1 -n "Deepseek R1" -g deepseek/deepseek-reasoner -k '$DEEPSEEK_API_KEY'
🔧 Added model deepseek-r1 in /Users/nick/.dreadnode/models.yml

$ dreadnode model add --id deepseek-v3 -n "Deepseek V3" -g deepseek/deepseek-chat -k '$DEEPSEEK_API_KEY'
🔧 Added model deepseek-v3 in /Users/nick/.dreadnode/models.yml

$ dreadnode model list
╭─────────────┬─────────────┬──────────┬────────────────────────────┬───────────────────╮
│ ID          │ Name        │ Provider │ Generator ID               │ API Key           │
├─────────────┼─────────────┼──────────┼────────────────────────────┼───────────────────┤
│ deepseek-r1 │ Deepseek R1 │ -        │ deepseek/deepseek-reasoner │ $DEEPSEEK_API_KEY │
│ deepseek-v3 │ Deepseek V3 │ -        │ deepseek/deepseek-chat     │ $DEEPSEEK_API_KEY │
╰─────────────┴─────────────┴──────────┴────────────────────────────┴───────────────────╯
```

Wrapping the `-k` value in single quotes avoids resolving the API key right now, and tells the CLI to pass it from your environment any time you use one of these models.
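The quoting distinction matters at the shell level: single quotes pass the literal string `$DEEPSEEK_API_KEY` through to the CLI, while double quotes would expand the variable immediately. A quick illustration, using a stand-in key value:

```shell
# Stand-in value for demonstration only
export DEEPSEEK_API_KEY="sk-example"

# Single quotes: the shell does NOT expand the variable, so the CLI
# receives the literal name and can resolve it from the environment later
echo '$DEEPSEEK_API_KEY'   # prints: $DEEPSEEK_API_KEY

# Double quotes: the shell expands the variable right now
echo "$DEEPSEEK_API_KEY"   # prints: sk-example
```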
Now you can pass these models directly to your agent when you deploy it:
```
$ dreadnode agent deploy -m deepseek-r1
╭─ run f7eb54b0-0cda-41cf-a15e-d3991be90531 ──────────────────────────────────────────────╮
│                                                                                         │
│  status     completed                                                                   │
│  strike     Simple (simple)                                                             │
│  type       network-ctf                                                                 │
│                                                                                         │
│  model      deepseek-r1                                                                 │
│  agent      demo [chocolate-condor] (rev 1)                                             │
│  image      dev-registry.dreadnode.io/monoxgas/agents/demo:c39f7c1a                     │
│  notes      -                                                                           │
│                                                                                         │
│  duration   57.8s                                                                       │
│  start      Tue Jan 21 17:47:01 2025                                                    │
│  end        Tue Jan 21 17:47:59 2025                                                    │
│                                                                                         │
│  zone       status      duration   inferences   outputs   score                         │
│  ────────────────────────────────────────────────────────────────                       │
│  web        completed   31.0s      1            1          0                            │
│  database   completed   43.0s      1            2          0                            │
│                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────╯
```
CLI storage
The CLI stores all user-defined models in `~/.dreadnode/models.yml`, which you can manually edit and share.
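The exact schema of `models.yml` isn't documented here, so the following is only a sketch: the field names below are guesses based on the columns shown by `dreadnode model list` (ID, Name, Provider, Generator ID, API Key), and may not match the real file:

```yaml
# Hypothetical models.yml entry -- key names assumed from `dreadnode model list` output
deepseek-r1:
  name: Deepseek R1
  generator_id: deepseek/deepseek-reasoner
  api_key: $DEEPSEEK_API_KEY
```

Keeping the API key as an unexpanded `$DEEPSEEK_API_KEY` reference means the file can be shared without leaking secrets.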