Tools in Rigging allow language models to interact with external systems, execute code, or perform well-defined tasks during generation. Rigging v3 introduces a unified tool system, making it easier to define tools and control how they interact with different language models.

For most uses, you can write any function with type hints and pass it directly into a pipeline:
```python
import rigging as rg

def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What is 2 + 3?")
    .using(add_numbers)  # Pass the function directly
    .run()
)
```
Rigging uses the function's type hints and docstring to automatically generate the schema and description for the language model. If you'd like to modify your tool's name, description, or schema, you can use the @tool decorator and pass the decorated tool into pipelines.
Decorate any regular Python function (including static methods) with @rigging.tool to make it usable by the Rigging framework.
```python
import typing as t
from typing import Annotated

import rigging as rg
import requests

@rg.tool
def get_weather(city: Annotated[str, "The city name to get weather for"]) -> str:
    """Gets the current weather for a specified city."""
    try:
        city = city.replace(" ", "+")
        # Use a real weather API in practice
        return requests.get(f"http://wttr.in/{city}?format=3").text.strip()
    except Exception as e:
        return f"Failed to get weather: {e}"

# The 'get_weather' object is now a rigging.tool.Tool instance
print(get_weather.name)
# > get_weather

print(get_weather.description)
# > Gets the current weather for a specified city.

print(get_weather.parameters_schema)
# > {'type': 'object', 'properties': {'city': {'title': 'City', 'description': 'The city name to get weather for', 'type': 'string'}}, 'required': ['city']}
```
- Type hints are crucial for defining the parameters the model needs to provide.
- Use typing.Annotated to provide descriptions for parameters where needed.
- The function's docstring is used as the tool's description for the model.
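If you want to override the generated name or description, the decorator accepts overrides. A minimal sketch (the name keyword is shown for @tool_method later on this page and assumed here to work the same way for @tool):

```python
import rigging as rg

# Hypothetical override: expose the function under a different tool name.
@rg.tool(name="sum_numbers")
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

print(add_numbers.name)
# > sum_numbers
```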
If your tool logic resides within a class and needs access to instance state (self), use the @rigging.tool_method decorator instead.
```python
import typing as t
from typing import Annotated

import rigging as rg

class UserProfileManager:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self._users = {"123": {"name": "Alice", "email": "alice@example.com"}}  # Dummy data

    @rg.tool_method(name="fetch_user_profile")  # Optional: override name
    def get_profile(self, user_id: Annotated[str, "The ID of the user to fetch"]) -> dict[str, t.Any] | str:
        """Retrieves a user's profile information using their ID."""
        # Use self.api_key etc. here
        profile = self._users.get(user_id)
        if profile:
            return profile
        return f"User with ID {user_id} not found."

# Instantiate the class
profile_manager = UserProfileManager(api_key="dummy_key")

# Access the tool method *through the instance*
user_profile_tool = profile_manager.get_profile

print(user_profile_tool.name)
# > fetch_user_profile

print(user_profile_tool.description)
# > Retrieves a user's profile information using their ID.
```
@tool_method correctly handles the self argument, ensuring it’s not included in the schema presented to the language model. Use @tool for static methods if they don’t require self.
Both decorators use Tool.from_callable() internally to wrap your function/method into a Tool object. This object holds the function, its generated schema, name, and description.
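As a quick illustration, you can also call it yourself to wrap an undecorated function. A minimal sketch, assuming Tool is exported at the package root as rg.Tool:

```python
import rigging as rg

def multiply(a: int, b: int) -> int:
    """Multiplies two numbers together."""
    return a * b

# Wrap the plain function the same way the decorators do internally.
multiply_tool = rg.Tool.from_callable(multiply)

print(multiply_tool.name)         # > multiply
print(multiply_tool.description)  # > Multiplies two numbers together.
```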
You can define your tools as either synchronous or asynchronous functions. Rigging will automatically handle the invocation based on the function’s type. Usually it’s good practice to use async def for all your tool functions, and you can opt into any asynchronous behavior later when you start to scale your pipelines.
```python
import aiohttp
import requests

import rigging as rg

@rg.tool
async def afetch_data(url: str) -> dict:
    """Fetches data from a given URL (async)"""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

@rg.tool
def fetch_data(url: str) -> dict:
    """Fetches data from a given URL (sync)"""
    response = requests.get(url)
    return response.json()

data = await afetch_data("https://api.example.com/data")
data = fetch_data("https://api.example.com/data")

@rg.prompt(tools=[afetch_data, fetch_data])
def explore_api(url: str) -> rg.Chat:
    "Explore the API at the given URL"
    ...

chat = await explore_api("https://api.example.com/data")
```
To make tools available during generation, pass them to the ChatPipeline.using() method.
```python
import rigging as rg

# Assume get_weather tool is defined as above
# Assume profile_manager.get_profile tool is defined and instantiated

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What's the weather like in Paris, and what's Alice's email (user ID 123)?")
    .using(
        get_weather,                  # Pass the decorated function directly
        profile_manager.get_profile,  # Pass the bound method from the instance
        mode="auto",                  # Let Rigging choose the best invocation method (API or native)
        max_depth=5,                  # Limit recursive tool calls (e.g., if a tool calls another prompt)
    )
    .run()
)

print(chat.conversation)
```
You can also pass an entire class instance directly to .using(), and Rigging will automatically discover and include all methods decorated with @tool_method:
```python
import rigging as rg

class DatabaseTools:
    def __init__(self, connection_string: str):
        self.connection_string = connection_string

    @rg.tool_method
    def query_users(self, name_filter: str) -> list[dict]:
        """Query users from the database by name filter."""
        # Database query logic here
        return [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

    @rg.tool_method
    def create_user(self, name: str, email: str) -> dict:
        """Create a new user in the database."""
        # User creation logic here
        return {"id": 3, "name": name, "email": email}

# Instantiate the class
db_tools = DatabaseTools("postgresql://localhost/mydb")

# Pass the instance directly - Rigging will find both tool methods
chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat(
        "Find all users with 'Al' in their name, then create a new user "
        "named Charlie with email charlie@example.com"
    )
    .using(db_tools)  # Pass the entire instance
    .run()
)
```
This is equivalent to passing each tool method individually:
```python
# These two approaches are equivalent:

.using(db_tools)  # Automatic discovery

.using(
    db_tools.query_users,
    db_tools.create_user,
)  # Manual specification
```
When you pass a class instance to .using(), Rigging uses Python’s inspect module to discover all methods decorated with @tool_method. This automatic discovery only includes methods that have been properly decorated - regular methods without the @tool_method decorator will be ignored.
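Conceptually, the discovery step looks something like this sketch (not Rigging's actual internals; it assumes bound tool methods resolve to rg.Tool objects, as the earlier fetch_user_profile example suggests):

```python
import inspect

import rigging as rg

# Collect every attribute on the instance that is already a tool object,
# i.e. a method decorated with @rg.tool_method.
discovered_tools = [
    member
    for _, member in inspect.getmembers(db_tools)
    if isinstance(member, rg.Tool)
]

print([tool.name for tool in discovered_tools])
# > ['create_user', 'query_users']
```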
The mode parameter in .using() controls how Rigging interacts with the language model for tool calls:
- auto (Default): Rigging checks if the model provider supports API-level function calling (like OpenAI, Anthropic, or Google). If yes, it uses that native, efficient mechanism. If not, it falls back to xml.
- api: Forces the use of the provider's function calling API. Will fail if the provider doesn't support it.
- xml: Rigging injects instructions and an XML schema into the prompt, telling the model how to format its output to request a tool call using specific XML tags. Rigging parses this XML.
- json-in-xml: Similar to xml, but the model is instructed to place a JSON object containing the arguments within the XML tags.
- json-with-tag: Similar to json, but the JSON object is wrapped in a specific tag (e.g., <tool_call>).
- json: The model is instructed to output tool calls as raw JSON anywhere in the message content, with name and arguments fields.
Generally, auto is recommended as it leverages the most efficient method available.
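If you want to override the default, pass the mode explicitly. A short sketch, reusing the get_weather tool from above:

```python
# Force prompt-based (XML) tool calling even on a provider that supports
# native function calling.
chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What's the weather in Tokyo?")
    .using(get_weather, mode="xml")
    .run()
)
```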
The max_depth parameter limits how many levels deep tool calls can go. If a tool itself triggers another prompt that uses tools, this prevents infinite loops.
You may want to use a particular tool, or catch a condition inside a tool, and indicate to any pipelines that it should stop going back to the model for more calls. You can do this by raising or returning a rigging.Stop exception with a message to be passed back to the model for context.
```python
import rigging as rg

@rg.tool
def execute_code(code: str) -> str:
    """Executes code and returns its output."""
    output = ...  # Run the code and capture its output

    if "<flag>" in output:
        # Stop the model from executing more code
        return rg.Stop("Task finished")  # or `raise rg.Stop("Task finished")`

    return output

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("Work to accomplish <work>, then finish it.")
    .using(execute_code)
    .run()
)
```
Returning the rg.Stop exception instead of raising it is helpful if you don’t want any surrounding code (decorators that wrap the tool function) to catch the exception, alter it, or behave as if a typical exception occurred.
This stop indication won’t completely halt the pipeline, but it will let it continue to any additional parsing mechanics or custom callbacks which follow tool calling.
The Model Context Protocol (MCP) is an open standard for language models to interact with external tools and services. Rigging provides a client to connect to MCP-compliant servers.
Use the rigging.mcp function, specifying the transport method (stdio or sse) and the connection parameters. It returns an async context manager.
```python
import rigging as rg

# Connect to an MCP server launched as a local process that communicates
# over standard input/output.
# (Assuming 'my-mcp-server' is an executable that starts the MCP server)
command = "my-mcp-server"
args = ["--port", "stdio"]  # Example arguments

async with rg.mcp("stdio", command=command, args=args) as mcp_client:
    # mcp_client.tools contains the list of tools discovered from the server
    print(f"Discovered {len(mcp_client.tools)} MCP tools via stdio.")

    if mcp_client.tools:
        chat = (
            await rg.get_generator("openai/gpt-4o-mini")
            .chat("Use the MCP tool Y.")  # Adjust prompt based on available tools
            .using(*mcp_client.tools)
            .run()
        )
        print(chat.conversation)
    else:
        print("No MCP tools found.")
```
The mcp context manager handles the connection, tool discovery, and communication with the MCP server. Inside the async with block, mcp_client.tools provides the list of discovered Tool objects ready to be used with .using().
```python
import rigging as rg

async with rg.mcp("stdio", command="claude", args=["mcp", "serve"]) as mcp_client:
    print(f"Discovered {len(mcp_client.tools)} tools:")
    for tool in mcp_client.tools:
        print(f" |- {tool.name}")

    chat = (
        await rg.get_generator("claude-3-7-sonnet-latest")
        .chat("Using tools, create a file_writer.py rigging tool that writes to a file.")
        .using(*mcp_client.tools)
        .run()
    )

    print(chat.conversation)
```
In addition to consuming tools from MCP servers, Rigging can expose its own tools as an MCP server. This allows you to write a single tool implementation and use it from other MCP-compliant clients, such as Claude Code.

The rigging.as_mcp function is the primary way to create an MCP server from your tools. It takes a list of Rigging Tool objects (or raw Python functions) and returns a mcp.server.fastmcp.FastMCP server instance, which you can then configure and run.

How it works:
1. You provide a list of tools to as_mcp.
2. It automatically converts raw functions into rigging.Tool objects.
3. For each tool, it creates a bridge that handles:
   - Exposing the tool's name, description, and parameter schema.
   - Calling your Python function when a request comes in.
   - Converting your function's return value (including rigging.Message and rigging.Stop) into a format the MCP client can understand.
4. It returns a FastMCP server instance, ready to be run.
Let's create a server that exposes a file_writer tool.

1. Define your tool(s) in a file (e.g., file_writer.py):
```python
import rigging as rg
from typing import Annotated

@rg.tool()
def write_file(
    filename: Annotated[str, "The name of the file to write to."],
    content: Annotated[str, "The content to write to the file."],
) -> str:
    """
    Writes content to a local file.

    Creates the file if it doesn't exist, and overwrites it if it does.
    """
    with open(filename, "w") as f:
        f.write(content)
    return f"Successfully wrote {len(content)} bytes to {filename}."

if __name__ == "__main__":
    # Create the MCP server instance from a list of tools
    rg.as_mcp([write_file], name="File IO Tools").run()
```
Now, this server is ready to be used by any MCP client:
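For example, a minimal sketch that consumes it from Rigging itself over stdio (assuming `python file_writer.py` launches the server defined above):

```python
import rigging as rg

# Launch the server script as a subprocess and talk to it over stdio.
async with rg.mcp("stdio", command="python", args=["file_writer.py"]) as mcp_client:
    chat = (
        await rg.get_generator("openai/gpt-4o-mini")
        .chat("Write 'hello world' to notes.txt using the available tool.")
        .using(*mcp_client.tools)
        .run()
    )
    print(chat.conversation)
```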
Robopages is a framework for building and hosting tool-enabled "pages" or APIs. Rigging can dynamically fetch the available tools from a running Robopages server and make them available to your language model.

Use the rigging.robopages function to connect to a Robopages endpoint and retrieve its tools.
```python
import rigging as rg

# URL of your running Robopages server
ROBOPAGES_URL = "http://localhost:8080"  # Example URL

try:
    # Fetch tools from the server
    robopages_tools = rg.robopages(ROBOPAGES_URL)

    # Use the fetched tools in a pipeline
    chat = (
        await rg.get_generator("openai/gpt-4o-mini")
        .chat("Use the available Robopages tool to do X.")  # Adjust prompt based on available tools
        .using(*robopages_tools)  # Unpack the list of tools
        .run()
    )
    print(chat.conversation)
except Exception as e:
    print(f"Failed to connect to Robopages or use tools: {e}")
    # Handle connection errors or cases where no tools are found
```
This fetches the tool definitions (name, description, parameters) from the Robopages server. When the language model requests one of these tools, Rigging sends the request back to the Robopages server for execution.
1. Rigging prepares the tool definitions based on the selected mode:
   - For api mode, the definitions (ApiToolDefinition) are passed to the model provider's API.
   - For xml or json-in-xml modes, Rigging injects the tool schemas and usage instructions into the system prompt.
2. During generation, if the model decides to call a tool:
   - In api mode, the provider returns structured tool call information (ApiToolCall).
   - In native modes, Rigging parses the model's output for the expected XML format (XmlToolCall or JsonInXmlToolCall).
3. Rigging validates the arguments provided by the model against the tool's signature using Pydantic.
4. Your original Python function/method (or the relevant MCP/Robopages call) is executed with the validated arguments.
5. The return value is formatted into a message (a Message with role="tool" for API calls, or a user message containing a NativeToolResult for native calls) and sent back to the model.
6. The generation process continues until the model produces a final response without tool calls or the max_depth is reached.
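One quick way to see this round trip is to print the roles in a finished conversation. A short sketch, reusing the get_weather tool and the message attributes described elsewhere on this page:

```python
chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("What's the weather in Paris?")
    .using(get_weather, mode="api")
    .run()
)

# Expect something like: user -> assistant (tool call) -> tool (result) -> assistant
for message in chat.all:
    print(f"[{message.role}] {str(message.content)[:80]}")
```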
Tools have built-in error handling that primarily focuses on argument validation and JSON parsing errors that occur during tool execution. Understanding how this works is crucial for building robust tool-enabled pipelines.
By default, tools catch common errors that occur during the tool call process:
- json.JSONDecodeError: when the model provides malformed JSON arguments.
- ValidationError: when the model's arguments don't match the tool's type signature.
```python
import rigging as rg

@rg.tool()  # Default: catch={json.JSONDecodeError, ValidationError}
async def add(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

# If the model calls add({"a": 1}) without 'b', a ValidationError occurs
# The tool catches it and returns an error message to the model
pipeline = (
    rg.get_generator("openai/gpt-4o-mini")
    .chat("Use the add tool with only argument 'a' set to 1, no 'b'")
    .using(add)
)

chat = await pipeline.run()

# The model receives: "<error>1 validation error for call[add]..."
# And can respond acknowledging the error or retry with correct arguments
```
Key Point: These validation errors are handled at the tool level and become part of the conversation flow. The model receives error feedback and can learn from it, but the pipeline continues running.
You can override the default error handling behavior:
```python
# Catch all exceptions (including your own function errors)
@rg.tool(catch=True)
def risky_tool(input: str) -> str:
    if input == "fail":
        raise ValueError("Something went wrong!")
    return "Success"

# Catch only specific exceptions
@rg.tool(catch={ValueError, TypeError})
def selective_tool(input: str) -> str:
    # Only ValueError and TypeError will be caught and returned as error messages
    return process_input(input)

# Catch no exceptions (let them propagate to pipeline level)
@rg.tool(catch=False)
def strict_tool(input: str) -> str:
    # Any errors will stop the pipeline unless caught by pipeline.catch()
    return process_input(input)
```
When tools handle errors gracefully, error information is preserved in message metadata:
```python
chat = await pipeline.run()

# Check for tool errors in the conversation
for message in chat.all:
    if message.role == 'tool' and 'error' in message.metadata:
        print(f"Tool call failed: {message.metadata['error']}")
        print(f"Tool response: {message.content}")
```
Since tools defined with @tool_method operate on class instances, they can maintain and modify state across multiple calls within a single pipeline execution.
```python
import rigging as rg

class CounterTool:
    def __init__(self):
        self.count = 0

    @rg.tool_method
    def increment(self, amount: int = 1) -> str:
        """Increments the counter and returns the new value."""
        self.count += amount
        return f"Counter is now {self.count}"

# Usage
counter = CounterTool()

chat = (
    await rg.get_generator(...)
    .chat("Increment twice")
    .using(counter.increment)
    .run()
)
```
You could also do the same thing by returning a stateful tool function defined in a closure:
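A minimal sketch of that approach (assumed shape, not a canonical example): build the tool inside a factory function so it closes over mutable state.

```python
import rigging as rg

def make_counter_tool():
    count = 0

    @rg.tool
    def increment(amount: int = 1) -> str:
        """Increments the counter and returns the new value."""
        nonlocal count
        count += amount
        return f"Counter is now {count}"

    return increment

counter_tool = make_counter_tool()

chat = (
    await rg.get_generator("openai/gpt-4o-mini")
    .chat("Increment twice")
    .using(counter_tool)
    .run()
)
```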