Tools are Python functions or class methods that agents call to interact with external systems, access files, execute code, and perform actions during execution.

Why use tools:
Extend Capabilities - Give agents access to filesystems, APIs, databases, and custom functionality beyond LLM reasoning
Stateless Functions - Use the @tool decorator for simple, standalone operations without shared state
Stateful Toolsets - Group related tools with shared configuration, connections, or state using Toolset classes
When to use tools: Use @tool for simple standalone functions. Use a Toolset when tools need shared configuration (credentials, paths) or state (connections, caches). Use built-in toolsets like Filesystem for common operations.

This guide covers the @tool decorator, toolsets, built-in tools, tool configuration, variants for access control, and best practices.
Tools in the Dreadnode SDK provide agent-specific features like variants for access control, state management, and built-in toolsets for common operations.
```python
import dreadnode as dn

@dn.tool
def get_stock_price(symbol: str) -> float:
    """Gets the current price of a stock symbol."""
    # ... logic to call a financial API ...
    return 150.75
```
Use the @tool decorator for simple, self-contained functions. Use a Toolset when you need to group related tools (like db.query, db.insert) or manage shared state (like a database connection).
Define tools using the @dn.tool decorator with type hints and a docstring:
```python
from typing import Annotated

import dreadnode as dn

@dn.tool
def search_documentation(
    query: Annotated[str, "The search query"],
    max_results: Annotated[int, "Maximum number of results to return"] = 10,
) -> list[dict]:
    """
    Search the documentation for relevant articles.

    Returns a list of matching documents with titles and URLs.
    """
    # Implementation here
    return [{"title": "...", "url": "..."}]
```
Key elements:
Type hints: Required for generating the tool schema
Annotated: Provides parameter descriptions for the agent
Docstring: Describes the tool's purpose to the agent
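The SDK's exact schema format isn't documented here, but the mechanics of reading type hints and `Annotated` descriptions can be sketched with just the standard library. The `build_param_schema` helper below is hypothetical, for illustration only:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints

def build_param_schema(func) -> dict:
    """Collect parameter names, types, descriptions, and required flags.

    Illustrative only -- the actual schema the SDK generates may differ.
    """
    hints = get_type_hints(func, include_extras=True)
    sig = inspect.signature(func)
    schema = {}
    for name, param in sig.parameters.items():
        hint = hints.get(name)
        description = None
        if get_origin(hint) is Annotated:
            # First arg is the real type; metadata may hold a description string
            hint, *metadata = get_args(hint)
            description = next((m for m in metadata if isinstance(m, str)), None)
        schema[name] = {
            "type": getattr(hint, "__name__", str(hint)),
            "description": description,
            "required": param.default is inspect.Parameter.empty,
        }
    return schema

def search(query: Annotated[str, "The search query"], limit: int = 10) -> list:
    """Search for documents."""
    return []

schema = build_param_schema(search)
```

This is why missing type hints break schema generation: there is nothing for the introspection step to read.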
Use the catch parameter to control how exceptions are handled. For graceful task termination, use stop conditions or hooks rather than raising from tools.
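The SDK's `catch` parameter isn't shown in detail above; as a rough mental model, catching turns a raised exception into an error message the agent can read and react to, rather than aborting the run. A minimal pure-Python sketch of that behavior (the `with_catch` decorator is hypothetical, not an SDK API):

```python
import functools

def with_catch(catch: bool = False):
    """Sketch of catch-style error handling: when enabled, exceptions
    become error text returned to the model instead of propagating."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                if not catch:
                    raise
                # The agent sees this string and can retry or adjust
                return f"Tool error: {type(exc).__name__}: {exc}"
        return wrapper
    return decorator

@with_catch(catch=True)
def divide(a: float, b: float) -> float:
    """Divide a by b."""
    return a / b
```

With catching enabled, a bad call like `divide(1, 0)` yields an error string the agent can recover from instead of crashing the task.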
```python
import aiohttp  # third-party HTTP client

import dreadnode as dn

@dn.tool
async def fetch_async(url: str) -> dict:
    """Fetches data asynchronously."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()
```
Use async def for tools that perform I/O operations to enable efficient concurrent execution.
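The payoff of async tools is that the event loop can interleave many pending I/O waits. A small standalone demonstration using `asyncio.sleep` as a stand-in for network calls:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for real network I/O (e.g. an HTTP request)
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Three 0.1s "requests" finish in roughly 0.1s total, not 0.3s,
    # because the event loop runs them concurrently while each awaits.
    start = time.perf_counter()
    results = await asyncio.gather(
        fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1)
    )
    elapsed = time.perf_counter() - start
    assert elapsed < 0.25  # much less than the 0.3s sequential cost
    return results

results = asyncio.run(main())
```

A blocking (`def`) tool would hold the event loop for its full duration, serializing the same work.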
Toolsets support variants to conditionally expose tools. This is useful for permission models:
```python
from dreadnode.agent.tools import Toolset

import dreadnode as dn

class FileManager(Toolset):
    """Manages file operations with different permission levels."""

    variant: str = "read"  # Can be "read" or "write"

    @dn.tool_method(variants=["read", "write"])
    def read_file(self, path: str) -> str:
        """Read a file's contents."""
        with open(path) as f:
            return f.read()

    @dn.tool_method(variants=["write"])
    def write_file(self, path: str, content: str) -> str:
        """Write content to a file."""
        with open(path, "w") as f:
            f.write(content)
        return f"Wrote {len(content)} bytes to {path}"

# Read-only access
read_only = FileManager(variant="read")
read_only.get_tools()  # Returns only read_file

# Full access
full_access = FileManager(variant="write")
full_access.get_tools()  # Returns both methods
```
Toolsets can act as context managers for automatic resource setup and cleanup:
```python
from typing import Any

from dreadnode.agent.tools import Toolset

import dreadnode as dn

class SSHConnection(Toolset):
    """Tools for executing commands over SSH."""

    host: str
    username: str
    _connection: Any = None

    async def __aenter__(self):
        # connect_ssh is a placeholder for your SSH library of choice
        self._connection = await connect_ssh(self.host, self.username)
        return self

    async def __aexit__(self, *args):
        if self._connection:
            await self._connection.close()

    @dn.tool_method
    async def run_command(self, command: str) -> str:
        """Execute a command on the remote host."""
        return await self._connection.execute(command)
```
Toolsets with __aenter__/__aexit__ are automatically made re-entrant by the SDK. Multiple context manager entries only call the actual setup/teardown once.
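The re-entrancy guarantee described above can be sketched in plain Python: track an entry depth, run setup only on the first enter and teardown only on the last exit. The `ReentrantResource` class is illustrative, not the SDK's implementation:

```python
import asyncio

class ReentrantResource:
    """Sketch of re-entrancy: setup runs on the first enter, teardown
    on the last exit, regardless of how many times it's nested."""

    def __init__(self):
        self._depth = 0
        self.setup_calls = 0
        self.teardown_calls = 0

    async def __aenter__(self):
        if self._depth == 0:
            self.setup_calls += 1  # real setup (e.g. open a connection) here
        self._depth += 1
        return self

    async def __aexit__(self, *args):
        self._depth -= 1
        if self._depth == 0:
            self.teardown_calls += 1  # real teardown here

async def main() -> ReentrantResource:
    res = ReentrantResource()
    async with res:
        async with res:  # nested entry: no second setup
            pass
    return res

res = asyncio.run(main())
```

This is why passing the same toolset to multiple agents, or entering it both manually and via the SDK, doesn't open duplicate connections.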
The Model Context Protocol (MCP) is an open standard for language models to interact with external tools. The SDK supports both consuming and serving MCP tools.
Connect to MCP servers and use their tools in your agents:
```python
import dreadnode as dn
import rigging as rg

# Via stdio (local process)
async with rg.mcp("stdio", command="my-mcp-server", args=["--port", "stdio"]) as mcp:
    agent = dn.Agent(
        name="mcp-user",
        model="gpt-4o-mini",
        tools=[*mcp.tools],
    )
    result = await agent.run("Use the MCP tools...")

# Via SSE (HTTP endpoint)
async with rg.mcp("sse", url="http://localhost:8001/mcp") as mcp:
    agent = dn.Agent(
        name="mcp-user",
        model="gpt-4o-mini",
        tools=[*mcp.tools],
    )
```
You can also serve your own tools as an MCP server:

```python
import dreadnode as dn
import rigging as rg

@dn.tool
def write_file(filename: str, content: str) -> str:
    """Writes content to a local file."""
    with open(filename, "w") as f:
        f.write(content)
    return f"Wrote {len(content)} bytes to {filename}"

if __name__ == "__main__":
    rg.as_mcp([write_file], name="File Tools").run()
```
```bash
# Add to Claude Code
claude mcp add file_writer -- uv run --with rigging file_writer.py
```
After agent execution, inspect what tools were called:
```python
result = await agent.run("Analyze the codebase")

for message in result.messages:
    if hasattr(message, "tool_calls") and message.tool_calls:
        for call in message.tool_calls:
            print(f"Tool: {call.function.name}")
            print(f"Args: {call.function.arguments}")
```