Tools in Rigging allow language models to interact with external systems, execute code, or perform well-defined tasks during generation. Rigging v3 introduces a unified tool system, making it easier to define tools and control how they interact with different language models.

For most uses, you can write any function with type hints and pass it directly into a pipeline:

import rigging as rg

def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What is 2 + 3?")
    .using(add_numbers) # Pass the function directly
    .run()
)

Defining Tools

Rigging uses function signatures (type hints and docstrings) to automatically generate the schema and description presented to the language model. If you want to modify a tool's name, description, or schema, use the @tool decorator with the appropriate arguments and pass the resulting tool into your pipelines.
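
For example, a minimal sketch of overriding the generated metadata, assuming @tool accepts the same name override that @tool_method supports later in this section (the description argument is likewise an assumption):

import rigging as rg

# 'name' and 'description' overrides assumed to mirror @tool_method
@rg.tool(name="sum_values", description="Adds two integers and returns the result.")
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

print(add_numbers.name)
# > sum_values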

Using @tool for Functions

Decorate any regular Python function (including static methods) with @rigging.tool to make it usable by the Rigging framework.

from typing import Annotated
import rigging as rg
import requests

@rg.tool
def get_weather(city: Annotated[str, "The city name to get weather for"]) -> str:
    """Gets the current weather for a specified city."""
    try:
        city = city.replace(" ", "+")
        # Use a real weather API in practice
        return requests.get(f"http://wttr.in/{city}?format=3").text.strip()
    except Exception as e:
        return f"Failed to get weather: {e}"

# The 'get_weather' object is now a rigging.tool.Tool instance
print(get_weather.name)
# > get_weather
print(get_weather.description)
# > Gets the current weather for a specified city.
print(get_weather.parameters_schema)
# > {'type': 'object', 'properties': {'city': {'title': 'City', 'description': 'The city name to get weather for', 'type': 'string'}}, 'required': ['city']}
  • Type hints are crucial for defining the parameters the model needs to provide.
  • Use typing.Annotated to provide descriptions for parameters where needed.
  • The function’s docstring is used as the tool’s description for the model.

Using @tool_method for Class Methods

If your tool logic resides within a class and needs access to instance state (self), use the @rigging.tool_method decorator instead.

import typing as t
from typing import Annotated
import rigging as rg

class UserProfileManager:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self._users = {"123": {"name": "Alice", "email": "[email protected]"}} # Dummy data

    @rg.tool_method(name="fetch_user_profile") # Optional: override name
    def get_profile(self, user_id: Annotated[str, "The ID of the user to fetch"]) -> dict[str, t.Any] | str:
        """Retrieves a user's profile information using their ID."""
        # Use self.api_key etc. here
        profile = self._users.get(user_id)
        if profile:
            return profile
        return f"User with ID {user_id} not found."

# Instantiate the class
profile_manager = UserProfileManager(api_key="dummy_key")

# Access the tool method *through the instance*
user_profile_tool = profile_manager.get_profile

print(user_profile_tool.name)
# > fetch_user_profile
print(user_profile_tool.description)
# > Retrieves a user's profile information using their ID.

@tool_method correctly handles the self argument, ensuring it’s not included in the schema presented to the language model. Use @tool for static methods if they don’t require self.

Underlying Mechanism

Both decorators use Tool.from_callable() internally to wrap your function/method into a Tool object. This object holds the function, its generated schema, name, and description.
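
A minimal sketch of doing the same wrapping by hand with the Tool.from_callable() entry point mentioned above:

from rigging.tool import Tool

def multiply(a: int, b: int) -> int:
    """Multiplies two numbers together."""
    return a * b

# Equivalent to decorating 'multiply' with @rg.tool
multiply_tool = Tool.from_callable(multiply)
print(multiply_tool.name)         # > multiply
print(multiply_tool.description)  # > Multiplies two numbers together.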

Async vs Sync

You can define your tools as either synchronous or asynchronous functions, and Rigging will invoke them appropriately. It's generally good practice to define all of your tool functions with async def, which lets you opt into asynchronous behavior later as you scale your pipelines.

import aiohttp
import requests

import rigging as rg

@rg.tool
async def afetch_data(url: str) -> dict:
    """Fetches data from a given URL (async)"""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

@rg.tool
def fetch_data(url: str) -> dict:
    """Fetches data from a given URL (sync)"""
    response = requests.get(url)
    return response.json()

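# Decorated tools remain directly callable with their original signatures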
data = await afetch_data("https://api.example.com/data")
data = fetch_data("https://api.example.com/data")

@rg.prompt(tools=[afetch_data, fetch_data])
def explore_api(url: str) -> rg.Chat:
    "Explore the API at the given URL"
    ...

chat = await explore_api("https://api.example.com/data")

Using Tools in Pipelines

To make tools available during generation, pass them to the ChatPipeline.using() method.

import rigging as rg

# Assume get_weather tool is defined as above
# Assume profile_manager.get_profile tool is defined and instantiated

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What's the weather like in Paris, and what's Alice's email (user ID 123)?")
    .using(
        get_weather,                  # Pass the decorated function directly
        profile_manager.get_profile,  # Pass the bound method from the instance
        mode="auto",                  # Let Rigging choose the best invocation method (API or native)
        max_depth=5                   # Limit recursive tool calls (e.g., if a tool calls another prompt)
    )
    .run()
)

print(chat.conversation)

Passing Class Instances Directly

You can also pass an entire class instance directly to .using(), and Rigging will automatically discover and include all methods decorated with @tool_method:

import rigging as rg

class DatabaseTools:
    def __init__(self, connection_string: str):
        self.connection_string = connection_string

    @rg.tool_method
    def query_users(self, name_filter: str) -> list[dict]:
        """Query users from the database by name filter."""
        # Database query logic here
        return [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

    @rg.tool_method
    def create_user(self, name: str, email: str) -> dict:
        """Create a new user in the database."""
        # User creation logic here
        return {"id": 3, "name": name, "email": email}

# Instantiate the class
db_tools = DatabaseTools("postgresql://localhost/mydb")

# Pass the instance directly - Rigging will find both tool methods
chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("Find all users with 'Al' in their name, then create a new user named Charlie with email [email protected]")
    .using(db_tools)  # Pass the entire instance
    .run()
)

This is equivalent to passing each tool method individually:

# These two approaches are equivalent:
.using(db_tools)  # Automatic discovery

.using(
    db_tools.query_users,
    db_tools.create_user
)  # Manual specification

When you pass a class instance to .using(), Rigging uses Python's inspect module to discover all methods decorated with @tool_method. This automatic discovery only includes properly decorated methods; regular methods without the @tool_method decorator are ignored.
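
A minimal sketch of that behavior; the class and both methods are hypothetical:

import rigging as rg

class MixedTools:
    @rg.tool_method
    def exposed_tool(self) -> str:
        """A method the model can call."""
        return "ok"

    def internal_helper(self) -> str:
        # Not decorated, so .using() will not expose it to the model
        return "internal only"

pipeline = (
    rg.get_generator("openai/gpt-4o-mini")
    .chat("...")
    .using(MixedTools())  # Only 'exposed_tool' is discovered
)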

Tool Invocation Modes (mode)

The mode parameter in .using() controls how Rigging interacts with the language model for tool calls:

  • auto (Default): Rigging checks whether the model provider supports API-level function calling (like OpenAI, Anthropic, or Google). If so, it uses that efficient, structured mechanism. If not, it falls back to xml.
  • api: Forces the use of the provider's function calling API. This will fail if the provider doesn't support it.
  • xml: Rigging injects instructions and an XML schema into the prompt, telling the model how to format its output to request a tool call using specific XML tags. Rigging parses this XML.
  • json-in-xml: Similar to xml, but the model is instructed to place a JSON object containing the arguments within the XML tags.
  • json: The model is instructed to output tool calls as raw JSON objects, with name and arguments fields, anywhere in the message content.
  • json-with-tag: Similar to json, but the JSON object is wrapped in a specific tag (e.g., <tool_call>).

Generally, auto is recommended as it leverages the most efficient method available.
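
For example, forcing a prompted mode, which you might do for a provider without function-calling support (a minimal sketch reusing the get_weather tool from above):

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What's the weather like in Paris?")
    .using(get_weather, mode="json-in-xml")  # Override the default "auto"
    .run()
)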

Controlling Recursion (max_depth)

The max_depth parameter limits how many levels deep tool calls can go. If a tool itself triggers another prompt that uses tools, this prevents infinite loops.
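
A hedged sketch of that scenario: a tool which delegates to a second tool-enabled prompt, where max_depth caps the nesting. The summarize_url prompt and research tool are hypothetical, reusing the fetch_data tool from the Async vs Sync section:

import rigging as rg

@rg.prompt(tools=[fetch_data])
def summarize_url(url: str) -> rg.Chat:
    "Fetch and summarize the content at the given URL"
    ...

@rg.tool
async def research(url: str) -> str:
    """Summarizes a URL by delegating to a second tool-enabled prompt."""
    inner = await summarize_url(url)
    return inner.last.content  # Assumes Chat.last.content holds the final text

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("Research https://api.example.com/data and report back.")
    .using(research, max_depth=3)  # Caps how deep tool calling may recurse
    .run()
)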

Stopping Tool Calls

You may want a particular tool, or a condition detected inside a tool, to signal the pipeline that it should stop going back to the model for more calls. You can do this by raising a rigging.Stop exception with a message for the model.

import rigging as rg

@rg.tool
def execute_code(code: str) -> str:
    "Finish a task and report a summary of work"
    ...

    if "<flag>" in output: # Stop the model from executing more code
        raise rg.Stop("Task finished")

    return output

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("Work to accomplish <work>, then finish it.")
    .using(execute_code)
    .run()
)

This stop indication won't completely halt the pipeline; the run will still continue through any additional parsing mechanics or custom callbacks that follow tool calling.
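
For example, a minimal sketch assuming ChatPipeline supports a .then() callback (an assumption, as it isn't shown elsewhere in this section), which would still run after rg.Stop is raised inside execute_code:

async def log_result(chat: rg.Chat) -> None:
    # Runs after tool calling completes, including after a rg.Stop
    print(chat.last.content)

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("Work to accomplish <work>, then finish it.")
    .using(execute_code)
    .then(log_result)
    .run()
)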

MCP Integration (Model Context Protocol)

The Model Context Protocol (MCP) is an open standard for language models to interact with external tools and services. Rigging provides a client to connect to MCP-compliant servers.

Use the rigging.tool.mcp function, specifying the transport method (stdio or sse) and the connection parameters. It returns an async context manager.

Using stdio (Standard Input/Output)

Connect to an MCP server launched as a local process that communicates over standard input/output.

import rigging as rg

# Example: Assuming 'my-mcp-server' is an executable that starts the MCP server
command = "my-mcp-server"
args = ["--port", "stdio"] # Example arguments

async with rg.mcp("stdio", command=command, args=args) as mcp_client:
    # mcp_client.tools contains the list of tools discovered from the server
    print(f"Discovered {len(mcp_client.tools)} MCP tools via stdio.")

    if mcp_client.tools:
        chat = (
            await rg.get_generator("openai/gpt-4o-mini")
            .chat("Use the MCP tool Y.") # Adjust prompt based on available tools
            .using(*mcp_client.tools)
            .run()
        )
        print(chat.conversation)
    else:
        print("No MCP tools found.")

Using sse (Server-Sent Events)

Connect to an MCP server exposed via an HTTP endpoint using Server-Sent Events.

import rigging as rg

# URL of the MCP SSE endpoint
MCP_SSE_URL = "http://localhost:8001/mcp" # Example URL

async with rg.mcp("sse", url=MCP_SSE_URL) as mcp_client:
    # mcp_client.tools contains the list of tools discovered from the server
    print(f"Discovered {len(mcp_client.tools)} MCP tools via SSE.")

    if mcp_client.tools:
        chat = (
            await rg.get_generator("openai/gpt-4o-mini")
            .chat("Use the MCP tool Z.") # Adjust prompt based on available tools
            .using(*mcp_client.tools)
            .run()
        )
        print(chat.conversation)
    else:
        print("No MCP tools found.")

The mcp context manager handles the connection, tool discovery, and communication with the MCP server. Inside the async with block, mcp_client.tools provides the list of discovered Tool objects ready to be used with .using().

Robopages

Robopages is a framework for building and hosting tool-enabled “pages” or APIs. Rigging can dynamically fetch the available tools from a running Robopages server and make them available to your language model.

Use the rigging.tool.robopages function to connect to a Robopages endpoint and retrieve its tools.

import rigging as rg

# URL of your running Robopages server
ROBOPAGES_URL = "http://localhost:8080" # Example URL

try:
    # Fetch tools from the server
    robopages_tools = rg.robopages(ROBOPAGES_URL)

    # Use the fetched tools in a pipeline
    chat = (
        await rg.get_generator("openai/gpt-4o-mini")
        .chat("Use the available Robopages tool to do X.") # Adjust prompt based on available tools
        .using(*robopages_tools) # Unpack the list of tools
        .run()
    )
    print(chat.conversation)

except Exception as e:
    print(f"Failed to connect to Robopages or use tools: {e}")
    # Handle connection errors or cases where no tools are found

This fetches the tool definitions (name, description, parameters) from the Robopages server. When the language model requests one of these tools, Rigging sends the request back to the Robopages server for execution.

Tool Execution Flow

When you call .using():

  1. Rigging prepares the tool definitions based on the selected mode.
  2. For api mode, the definitions (ApiToolDefinition) are passed to the model provider’s API.
  3. For the native modes (xml, json-in-xml, json, json-with-tag), Rigging injects the tool schemas and usage instructions into the system prompt.
  4. During generation, if the model decides to call a tool:
    • In api mode, the provider returns structured tool call information (ApiToolCall).
    • In native modes, Rigging parses the model’s output for the expected format (e.g., XmlToolCall or JsonInXmlToolCall).
  5. Rigging validates the arguments provided by the model against the tool’s signature using Pydantic.
  6. Your original Python function/method (or the relevant MCP/Robopages call) is executed with the validated arguments.
  7. The return value is formatted into a message (a Message with role="tool" for API calls, or a user message containing a NativeToolResult for native calls) and sent back to the model.
  8. The generation process continues until the model produces a final response without tool calls or the max_depth is reached.
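
You can observe this flow in the finished chat object. A minimal sketch reusing the add_numbers tool from the start of this section; the role sequence shown is typical, not guaranteed:

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What is 2 + 3?")
    .using(add_numbers)
    .run()
)

print(chat.conversation)
# Typically: user -> assistant (tool call) -> tool (result) -> assistant ("2 + 3 is 5")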

Error Handling

You can control how errors raised within your tool function are handled using the catch parameter in @tool / @tool_method:

  • catch=False (Default): Errors propagate, potentially failing the pipeline run (unless caught by pipeline.catch()).
  • catch=True: Catches any exception, converts it to an error message, and sends that back to the model as the tool result.
  • catch={ValueError, TypeError}: Catches only specified exception types.

@rg.tool(catch=True) # Catch any exception
def potentially_failing_tool(input: str) -> str:
    if input == "fail":
        raise ValueError("Intentional failure")
    return "Success"

Tool State

Since tools defined with @tool_method operate on class instances, they can maintain and modify state across multiple calls within a single pipeline execution.

class CounterTool:
    def __init__(self):
        self.count = 0

    @rg.tool_method
    def increment(self, amount: int = 1) -> str:
        """Increments the counter and returns the new value."""
        self.count += amount
        return f"Counter is now {self.count}"

# Usage
counter = CounterTool()

chat = (
    await rg.get_generator(...)
    .chat("Increment twice")
    .using(counter.increment)
    .run()
)

You could also do the same thing by returning a stateful tool function defined in a closure:

def counter_tool():
    count = 0

    @rg.tool
    def increment(amount: int = 1) -> str:
        nonlocal count
        count += amount
        return f"Counter is now {count}"

    return increment

# Usage
increment = counter_tool()

chat = (
    await rg.get_generator(...)
    .chat("Increment twice")
    .using(increment)  # counter_tool() returns the tool itself
    .run()
)