Defining chat interactions as Python functions is a great abstraction on top of chat pipelines, allowing you to leverage parsing logic and type hints to define a code-free function that uses a generator/pipeline underneath to produce an output.

A Prompt is typically created using one of the following decorators:
@rg.prompt - Optionally takes a generator id, generator, or pipeline.
@generator.prompt - Use this generator when executing the prompt.
@pipeline.prompt - Use this pipeline when executing the prompt.
Prompt functions can be defined with or without the async keyword, but once wrapped they will always be represented as async calls due to their connection to chat pipelines. In other words, wrapping a synchronous function with @rg.prompt results in an async callable.
```python
import rigging as rg

@rg.prompt(generator_id="mistral/mistral-medium-latest")
def summarize(text: str) -> str:
    """Summarize this text."""
```
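The sync-to-async wrapping can be illustrated with a plain decorator. This is a stdlib analogy of the behavior, not rigging's actual implementation:

```python
import asyncio
import functools

def make_async(fn):
    """Wrap a synchronous function so it must be awaited,
    mirroring how @rg.prompt always yields an async callable."""
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@make_async
def greet(name: str) -> str:
    return f"Hello, {name}!"

# The wrapped function is now a coroutine function and must be awaited.
print(asyncio.iscoroutinefunction(greet))  # True
print(asyncio.run(greet("world")))         # Hello, world!
```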
Prompts are optionally bound to a pipeline/generator underneath (hence the generator and pipeline decorator variants), but they don’t have to be. We refer to bound prompts as “standalone”, because they can be executed directly as functions. Otherwise, you are first required to “bind” the prompt to a specific generator id, generator, or pipeline to make it callable. Do this with Prompt.bind or related methods.
import rigging as rg@rg.promptdef summarize(text: str) -> str: """Summarize this text."""generator = rg.get_generator("gpt-4o-mini")await summarize.bind(generator)("...")
Underneath, the function signature will be analyzed for inputs and outputs, and the docstring will be used to create a final jinja2 template which serves as the prompt text. You can always access the .template attribute to inspect how the prompt will be formatted.

Here are the general docstring processing rules:
Any docstring content will always be at the top of the prompt.
If no docstring is provided, a generic one will be substituted.
Any inputs not explicitly defined in the docstring will be appended after the docstring.
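These rules can be sketched with the standard library. The function below is a rough analogy, not rigging's implementation:

```python
import inspect

def build_template(fn) -> str:
    """Apply the rules above: docstring first (or a generic line),
    then a {{ name }} placeholder for each input not already referenced."""
    doc = inspect.getdoc(fn) or f"Convert the following inputs to outputs ({fn.__name__})."
    parts = [doc]
    for name in inspect.signature(fn).parameters:
        placeholder = "{{ " + name + " }}"
        if placeholder not in doc:
            parts.append(placeholder)
    return "\n\n".join(parts)

def summarize(text: str) -> str:
    ...

print(build_template(summarize))
# Convert the following inputs to outputs (summarize).
#
# {{ text }}
```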
```python
import rigging as rg

@rg.prompt
def summarize(text: str) -> str:
    ...

print(summarize.template)
# Convert the following inputs to outputs (summarize).
#
# {{ text }}
#
# Produce the following output:
#
# <str></str>
```
In other words, you can control how inputs are included in the prompt by referencing them inside the docstring using jinja2 syntax, or let Rigging handle this for you by omitting them.
In the example above, you’ll notice that defining our function to return a str results in the following text being appended to the prompt template:
```python
# Produce the following output:
#
# <str></str>
```
This is pretty light on context, and we can improve this by updating our signature with a Ctx annotation:
```python
from typing import Annotated

import rigging as rg

Summary = Annotated[str, rg.Ctx(tag="summary", example="[2-3 sentences]")]

@rg.prompt
def summarize(text: Annotated[str, rg.Ctx(tag="long-text")]) -> Summary:
    """Summarize this text."""

print(summarize.template)
# Summarize this text.
#
# {{ long_text }}
#
# Produce the following output:
#
# <summary>[2-3 sentences]</summary>
```
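Annotations like these can be read with standard typing introspection. Here is a minimal sketch using a stand-in Ctx class (not rigging's actual Ctx or its processing logic):

```python
from dataclasses import dataclass
from typing import Annotated, get_args, get_origin, get_type_hints

@dataclass
class Ctx:
    """Stand-in for rg.Ctx, purely to illustrate introspection."""
    tag: str = ""
    example: str = ""

def summarize(text: Annotated[str, Ctx(tag="long-text")]) -> Annotated[str, Ctx(tag="summary")]:
    ...

# include_extras=True preserves the Annotated metadata.
hints = get_type_hints(summarize, include_extras=True)
for name, hint in hints.items():
    if get_origin(hint) is Annotated:
        base, *metadata = get_args(hint)
        ctx = next(m for m in metadata if isinstance(m, Ctx))
        print(f"{name}: base={base.__name__}, tag={ctx.tag}")
# text: base=str, tag=long-text
# return: base=str, tag=summary
```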
We can apply Ctx annotations to any of the inputs and outputs of a prompt to override the xml tag, provide an example, and add prefix text.

Output processing is optional, and can be bypassed by returning a Chat object from the wrapped function. This allows you to do as you please with the generated output.
```python
import rigging as rg

@rg.prompt
def summarize(text: str) -> rg.Chat:
    """Summarize this text."""

print(summarize.template)
# Summarize this text.
#
# {{ text }}
```
You can also define more complex outputs by using a rigging model, list, tuple, or dataclass. Not every construction will be supported, and we attempt to pre-validate the output structure to ensure it can be processed correctly.
```python
import rigging as rg

class User(rg.Model):
    name: str
    email: str
    age: int

@rg.prompt
def generate_user() -> User:
    """Generate a fake test user."""

print(generate_user.template)
# Generate a fake test user.
#
# Produce the following output:
#
# <user>
# <name/>
# <email/>
# <age/>
# </user>
```
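The mapping from fields to empty XML tags can be sketched with a plain dataclass. This is a simplified analogy for flat fields only; rigging models also handle nesting, attributes, and validation:

```python
from dataclasses import dataclass, fields

@dataclass
class User:
    name: str
    email: str
    age: int

def xml_schema(cls) -> str:
    """Render empty-tag output guidance for each field of a dataclass."""
    tag = cls.__name__.lower()
    inner = "\n".join(f"  <{f.name}/>" for f in fields(cls))
    return f"<{tag}>\n{inner}\n</{tag}>"

print(xml_schema(User))
# <user>
#   <name/>
#   <email/>
#   <age/>
# </user>
```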
You can also embed a Chat object inside other objects, which will be excluded from any prompt guidance, but will be supplied with the value when the prompt is executed. This is great for gathering both structured data and the original chat.
```python
from typing import Annotated

import rigging as rg

joke = Annotated[str, rg.Ctx(tag="joke")]

@rg.prompt
def tell_joke() -> tuple[joke, rg.Chat]:
    """Tell a joke."""

print(tell_joke.template)
# Tell a joke.
#
# Produce the following output:
#
# <joke></joke>
```
In addition to templates, you can use .render with valid inputs to view the exact prompt as it will be sent to a generator. You can also use this to pass your prompt into pipelines at your discretion.
```python
from typing import Annotated

import rigging as rg

email = Annotated[str, rg.Ctx(tag="email")]

@rg.prompt
def convert_to_email(name: str, top: int = 5) -> list[email]:
    """Convert this name into the best {{ top }} email addresses."""

print(convert_to_email.render("John Doe"))
# Convert this name into the best 5 email addresses.
#
# <name>John Doe</name>
#
# Produce the following output for each item:
#
# <email></email>
```
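The substitution that .render performs can be approximated with plain string replacement. Rigging actually uses jinja2 templates and Ctx-driven tags; this is only a sketch of the idea:

```python
def render(template: str, **inputs) -> str:
    """Replace each {{ name }} placeholder with the value wrapped
    in a matching XML tag (simplified sketch, not rigging's renderer)."""
    for name, value in inputs.items():
        template = template.replace("{{ " + name + " }}", f"<{name}>{value}</{name}>")
    return template

template = "Convert this name into an email address.\n\n{{ name }}"
print(render(template, name="John Doe"))
# Convert this name into an email address.
#
# <name>John Doe</name>
```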
Prompt objects expose the following methods for execution, available if the prompt was supplied or bound to a pipeline or generator:
Prompt.run() (aliased as __call__)
Prompt.run_many()
Prompt.run_over()
You can also bind a prompt at runtime with any of the following:
Prompt.bind()
Prompt.bind_many()
Prompt.bind_over()
Everything configured on a pipeline or generator will be used when running the prompt. Watch/Then/Map callbacks, tools, and generate params can all be used to alter the behavior of the prompt. In general, you should consider prompts as producers of user messages, which will be passed to .fork() and which then handle the parsing of outputs.
```python
import rigging as rg

@rg.prompt(generator_id="claude-3-sonnet-20240229")
def write_code(description: str, language: str = "python") -> str:
    """Write a single function."""

code = await write_code("Calculate the factorial of a number.")
```