Tools
Rigging supports the concept of tools through two implementations:
- ‘API’ Tools: API-level tool definitions which require support from the model provider.
- ‘Native’ Tools: tools which are internally defined, parsed, and handled by Rigging (the original implementation).
In most cases, users should opt for API tools, which offer better provider integrations and performance. Regardless of tool type, the ChatPipeline.using() method should be used to register tools for use during generation.
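A minimal registration sketch, assuming a plain function tool — the function, prompt, and generator id below are illustrative, and the pipeline calls are commented so the snippet stands alone:

```python
def get_temperature(city: str) -> str:
    "Look up the current temperature for a city."
    # A real tool would call a weather service; we return a fixed value here.
    return f"72F in {city}"

# Registration with a pipeline (sketch; generator id and prompt are examples):
# import rigging as rg
# chat = (
#     await rg.get_generator("openai/gpt-4o")
#     .chat("What's the weather in Boston?")
#     .using(get_temperature)
#     .run()
# )
```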
API Tools
API tools are defined as standard callables (async supported) and are wrapped in the rg.ApiTool class before being used during generation.
We use Pydantic to introspect the callable and extract schema information from its signature, which brings some great benefits:
- API-compatible schema information from any function
- Robust argument validation for incoming inference data
- Flexible type handling for BaseModels, Fields, TypedDicts, and Dataclasses
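The idea behind this introspection can be sketched with the standard library alone — function_schema and its type map below are an illustrative stand-in, not Rigging's internal implementation, which relies on Pydantic:

```python
import inspect
from typing import get_type_hints

def function_schema(fn):
    """Build a minimal JSON-schema-style tool description from a callable's
    signature (an illustrative stand-in for what Rigging derives via Pydantic)."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    sig = inspect.signature(fn)
    properties = {
        name: {"type": type_map.get(hints.get(name), "object")}
        for name in sig.parameters
    }
    # Parameters without defaults are required.
    required = [
        name for name, param in sig.parameters.items()
        if param.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(city: str, units: str = "metric") -> str:
    "Get the current weather for a city."
    return "..."

schema = function_schema(get_weather)
# "city" is required; "units" has a default and is not.
```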
Just after the tool is converted, its function schema is added to GenerateParams.tools inside the ChatPipeline.
Internally, we leverage ChatPipeline.then() to handle responses from the model and attempt to resolve tool calls before starting another generation loop. This means that the point at which you pass the tool function into your chat pipeline defines its order amongst other callbacks like .then() and .map().
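The resolution loop can be sketched as follows — run_tool_loop and the message dictionaries are illustrative assumptions, not Rigging's internals:

```python
def run_tool_loop(generate, tools):
    """Minimal sketch of the generation loop: keep generating until the
    model emits a message with no tool calls (names are illustrative)."""
    history = []
    while True:
        message = generate(history)
        history.append(message)
        calls = message.get("tool_calls", [])
        if not calls:
            return history
        # Execute each requested tool and feed the result back into the chat.
        for call in calls:
            result = tools[call["name"]](**call["arguments"])
            history.append({"role": "tool", "content": str(result)})
```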
Native Tools
Much like models, native tools inherit from a base rg.Tool class. These subclasses are required to provide at least one function, along with name and description properties, to present to the LLM during generation. Every function you define, and the parameters within, must carry both type hints and annotations that describe their purpose.
Integrating native tools into the generation process is as easy as passing an instance of your tool class to the ChatPipeline.using() method.
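A minimal sketch of such a tool — a plain class is used here so the example stands alone; in practice the class would inherit from rg.Tool, and get_for_city is an illustrative method name:

```python
from typing import Annotated

class WeatherTool:  # in practice: class WeatherTool(rg.Tool)
    name = "weather"
    description = "Look up the current weather for a city."

    def get_for_city(self, city: Annotated[str, "The city to check"]) -> str:
        "Return a short weather report for the given city."
        # A real implementation would query a weather service.
        return f"The weather in {city} is sunny."

# Registration (sketch):
# chat = await pipeline.using(WeatherTool(), force=True).run()
```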
- The use of force=True here is optional, but it ensures that at least one tool is called before generation completes.
If/when the LLM elects to emit a valid tool call in Rigging’s format, Rigging will intercept it, process the arguments, ensure they conform to your function spec, and execute the desired function. Results are injected back into the chat, and the first message which does not include any tool calls triggers the end of the generation process.
“Tool State”
It’s worth noting that tools are passed into Rigging as instantiated classes, which means your tool is free to carry state about its operations as time progresses. Whether this is a good software design decision is up to you.
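For example, a tool instance can accumulate state across calls — CounterTool below is an illustrative, self-contained sketch, not a Rigging API:

```python
class CounterTool:
    name = "counter"
    description = "Counts how many times it has been called."

    def __init__(self):
        # Instance state persists across tool calls within one pipeline.
        self.calls = 0

    def increment(self) -> int:
        "Increment and return the running count."
        self.calls += 1
        return self.calls
```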
Under the Hood
If you are curious what is occurring “under the hood” (as you should be), you can print the entire conversation text and see our injected system prompt of instructions for using a tool, along with the auto-generated XML description of the WeatherTool we supplied to the model:
You can use any of the available tools by responding in the call format above. The XML will be parsed and the tool(s) will be executed with the parameters you provided. The results of each tool call will be provided back to you before you continue the conversation. You can execute multiple tool calls by continuing to respond in the format above until you are finished. Function calls take explicit values and are independent of each other. Tool calls cannot share, re-use, and transfer values between each other. The use of placeholders is forbidden.
The user will not see the results of your tool calls, only the final message of your conversation. Wait to perform your full response until after you have used any required tools. If you intend to use a tool, please do so before you continue the conversation.
Every tool assigned to the ChatPipeline will be processed by calling .get_description(), and a minimal tool-use prompt will be injected as, or appended to, the system message.
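That injection step can be sketched as follows — inject_tool_prompt and the message dictionaries are illustrative assumptions, not Rigging's actual code:

```python
def inject_tool_prompt(messages, tools):
    """Append each tool's description to the system message,
    creating a system message if none exists (illustrative sketch)."""
    block = "\n\n".join(tool.get_description() for tool in tools)
    for message in messages:
        if message["role"] == "system":
            message["content"] += "\n\n" + block
            return messages
    # No system message yet: inject one at the front.
    return [{"role": "system", "content": block}] + messages
```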