Chats and Messages are how Rigging represents the conversation with a model.
Chat objects hold a sequence of Message objects from before and after generation. This is the most common way to interact with LLMs, and both Chat and ChatPipeline are flexible objects that let you tune the generation process, gather structured outputs, validate parsing, perform text replacements, serialize and deserialize, fork conversations, and more.
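As a minimal sketch of this flow (the generator id here is just an example, and `Chat.all` combines the pre- and post-generation messages):

```python
import rigging as rg

# Build a ChatPipeline from a generator, then run it to produce a Chat
chat = (
    await rg.get_generator("gpt-4")
    .chat("Say hello in exactly one word.")
    .run()
)

# The Chat holds the full message sequence, including the generated reply
for message in chat.all:
    print(message.role, ":", message.content)
```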
You can access and manipulate the structured content parts directly:
```python
import rigging as rg

# Create a message with multiple content parts
message = rg.Message(
    role="user",
    content=[
        "Here's an image with a question:",
        rg.ContentImageUrl.from_file("chart.png"),
        "What trend do you see in this chart?",
    ],
)

# Access all content parts
for part in message.content_parts:
    print(f"Part type: {part.type}")

# Get just the text content (concatenated)
print(message.content)
# "Here's an image with a question: What trend do you see in this chart?"

# Add a new content part
message.content_parts.append(rg.ContentText(text="And is it significant?"))
```
You can use both ChatPipeline.apply() and ChatPipeline.apply_to_all() to substitute values prefixed with $ characters inside message contents for fast templating. This functionality uses string.Template.safe_substitute underneath.
```python
import rigging as rg

template = (
    rg.get_generator("gpt-4")
    .chat("What is the capital of $country?")
)

for country in ["France", "Germany"]:
    chat = await template.apply(country=country).run()
    print(chat.last)

# The capital of France is Paris.
# The capital of Germany is Berlin.
```
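As a companion sketch, apply_to_all() performs the same substitution across every message in the pipeline rather than just the last one (the system prompt below is illustrative, and keyword arguments are assumed to mirror apply()):

```python
import rigging as rg

pipeline = rg.get_generator("gpt-4").chat(
    [
        {"role": "system", "content": "Answer questions about $country briefly."},
        {"role": "user", "content": "What language is spoken in $country?"},
    ]
)

# apply_to_all() substitutes $country in both the system and user messages
chat = await pipeline.apply_to_all(country="Brazil").run()
print(chat.last)
```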
Messages can contain tool calls, which represent function calls made by the model. Rigging can handle tool calls for you; see the tool-calling documentation for details.
When using API-provided function calling, you can access the tool_calls property of the message:
```python
import rigging as rg

tool_definition = {
    "type": "function",
    "function": {
        "name": "http_get",
        "description": "Retrieve the contents of a URL",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "The URL to retrieve",
                }
            },
            "required": ["url"],
        },
    },
}

chat = (
    await rg.get_generator("gpt-4.1")
    .chat("Please summarize dreadnode.io")
    .with_(tools=[tool_definition])
    .run()
)

for tool_call in chat.last.tool_calls:
    print(f"ID: {tool_call.id}")
    print(f"Tool: {tool_call.name}")
    print(f"Arguments: {tool_call.function.arguments}")
```
Message objects hold all of their parsed ParsedMessagePart objects inside their .parts property. These parts maintain both the instance of the parsed Rigging model object and a .slice_ property that defines exactly where in the message content they are located.
Every time parsing occurs, these parts are re-synced: .to_pretty_xml() is called on each model, the clean content is stitched back into the message, any other slices affected by the operation are fixed up, and the .parts property is reordered based on where each part occurs in the message content.
```python
from typing import Annotated

from pydantic import StringConstraints

import rigging as rg

str_strip = Annotated[str, StringConstraints(strip_whitespace=True)]

class Summary(rg.Model):
    content: str_strip

message = rg.Message(
    "assistant",
    "Sure, the summary is: <summary > Rigging is a very powerful library </summary>. I hope that helps!",
)

message.parse(Summary)

print(message.content)  # (1)!
# Sure, the summary is: <summary>Rigging is a very powerful library</summary>. I hope that helps!

print(message.parts)
# [
#   ParsedMessagePart(model=Summary(content='Rigging is a very powerful library'), slice_=slice(22, 75, None))
# ]

print(message.content[message.parts[0].slice_])
# <summary>Rigging is a very powerful library</summary>
```
1. Notice how the message content was updated to fix the extra whitespace in the start tag and to apply our string-stripping annotation.
Because we track exactly where a parsed model sits inside a message, we can cleanly remove just that portion of the content and re-sync the other parts to align with the new content. This is helpful for removing context from a conversation that you don't want carried into future generations. It's a powerful primitive that lets you operate on messages more like a collection of structured models than raw text.
```python
import rigging as rg

class Reasoning(rg.Model):
    content: str

meaning = (
    await rg.get_generator("claude-2.1")
    .chat(
        "What is the meaning of life in one sentence? "
        f"Document your reasoning between {Reasoning.xml_tags()} tags.",
    )
    .run()
)

# Gracefully handle missing models
reasoning = meaning.last.try_parse(Reasoning)
if reasoning:
    print("Reasoning:", reasoning.content)

# Strip parsed content to avoid sharing
# previous thoughts with the model.
without_reasons = meaning.strip(Reasoning)
print("Meaning of life:", without_reasons.last.content)

follow_up = await without_reasons.continue_(...).run()
```
Both Chats and ChatPipelines support arbitrary metadata that you can use to store things like tags, metrics, and supporting data for storage, sorting, and filtering.

- ChatPipeline.meta() adds to ChatPipeline.metadata
- Chat.meta() adds to Chat.metadata

Metadata carries forward from a ChatPipeline to a Chat object when generation completes, and it is preserved through serialization.
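A short sketch of how these fit together (keyword-style arguments and the example values are assumptions based on the method names above):

```python
import rigging as rg

pipeline = (
    rg.get_generator("gpt-4")
    .chat("Classify this ticket: 'my login fails'")
    .meta(experiment="triage-v1")  # stored on ChatPipeline.metadata
)

chat = await pipeline.run()
print(chat.metadata)  # {'experiment': 'triage-v1'} - carried over from the pipeline

# Add more metadata after generation
chat = chat.meta(reviewed=True)
print(chat.metadata)  # {'experiment': 'triage-v1', 'reviewed': True}
```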
Chats maintain some additional data that helps you understand more about the generation process:

- Chat.stop_reason
- Chat.usage
- Chat.extra
It's the responsibility of the generator to populate these fields, and their content will vary depending on the underlying implementation. For instance, the transformers generator doesn't provide any usage information, while the vllm generator adds metrics information to the extra field.
We intentionally keep these fields as generic as possible to allow for future expansion. You’ll often find deep information about the generation process in the Chat.extra field.
```python
import rigging as rg

pipeline = (
    rg.get_generator("gpt-4")
    .chat("What is the 4th form of water?")
)

chat = await pipeline.with_(stop=["water"]).run()
print(chat.last.content)  # "The fourth form of"
print(chat.stop_reason)   # stop
print(chat.usage)         # input_tokens=17 output_tokens=5 total_tokens=22
print(chat.extra)         # {'response_id': 'chatcmpl-9UgcwYrdaVrqUXoNrMGvgxGQqS04V'}

chat = await pipeline.with_(stop=[], max_tokens=10).run()
print(chat.last.content)  # "The fourth form of water is often referred to as"
print(chat.stop_reason)   # length
```