Create dynamic functions out of thin air.
A `Prompt` is typically created using one of the following decorators:

- `@rg.prompt` - Optionally takes a generator id, generator, or pipeline.
- `@generator.prompt` - Use this generator when executing the prompt.
- `@pipeline.prompt` - Use this pipeline when executing the prompt.

Functions do not need to be defined with the `async` keyword, but they will always be represented as async calls once wrapped, based on their connection to chat pipelines. In other words, wrapping a synchronous function with `@rg.prompt` will result in an async callable.

A prompt can also be created without a generator or pipeline and bound to one later with `Prompt.bind` or related methods.
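The always-async behavior can be pictured with a minimal stdlib sketch. The `as_async` decorator below is a hypothetical analogue of the wrapping mechanic, not rigging's implementation:

```python
import asyncio
import functools
from typing import Any, Callable

def as_async(func: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a synchronous function so it must be awaited,
    mirroring how a prompt decorator yields an async callable."""
    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> Any:
        return func(*args, **kwargs)
    return wrapper

@as_async
def greet(name: str) -> str:
    return f"Hello, {name}!"

# greet was defined synchronously, but now must be awaited:
result = asyncio.run(greet("world"))
print(result)  # Hello, world!
```

The same applies to wrapped prompt functions: call them with `await` (or run them in an event loop) regardless of how they were defined.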
Use the `.template` attribute to inspect how the prompt will be formatted.
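As a rough illustration of the idea behind a prompt template, here is a stdlib sketch that derives one from a function's signature and docstring. The format shown is invented for illustration; the template rigging actually produces will differ:

```python
import inspect
from typing import Callable

def build_template(func: Callable) -> str:
    """Derive a simple prompt template from a function:
    docstring as instructions, parameters as xml-tagged placeholders."""
    sig = inspect.signature(func)
    doc = inspect.getdoc(func) or ""
    lines = [doc, ""]
    for name in sig.parameters:
        # Placeholders like <text>{text}</text>, filled in at call time.
        lines.append(f"<{name}>{{{name}}}</{name}>")
    return "\n".join(lines)

def summarize(text: str) -> str:
    """Summarize the following text in one sentence."""

template = build_template(summarize)
print(template)
```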
Here are the general docstring processing rules:

A `str` return annotation results in the following text being appended to the prompt template:
The `Ctx` annotation:

You can apply `Ctx` annotations to any of the inputs and outputs of a prompt. With `Ctx` we can override the xml tag, provide an example, and add prefix text.
Output processing is optional, and can be omitted by returning a `Chat` object from the wrapped function. This lets you handle the generated output however you please.

You can also nest a `Chat` object inside other objects; it will be excluded from any prompt guidance, but will be supplied with the value when the prompt is executed. This is great for gathering both structured data and the original chat.
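One way to picture keeping the raw conversation alongside parsed data, as a stdlib sketch (rigging's `Chat` object carries much more than a string, and the parsing shown here is invented for illustration):

```python
import re
from dataclasses import dataclass

@dataclass
class Summary:
    # Structured field parsed from the model output.
    text: str
    # Raw generation kept alongside the parsed data;
    # stands in for nesting the original chat.
    raw: str

def parse_summary(generated: str) -> Summary:
    match = re.search(r"<summary>(.*?)</summary>", generated, re.DOTALL)
    return Summary(text=match.group(1).strip() if match else "", raw=generated)

out = parse_summary("Sure! <summary>Rigging wraps functions into prompts.</summary>")
print(out.text)
```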
Call `.render` with valid inputs to view the exact prompt as it will be sent to a generator. You can also use this to pass your prompt into pipelines at your discretion.
Prompts can be executed with any of the following methods:

- `Prompt.run()` (aliased with `__call__`)
- `Prompt.run_many()`
- `Prompt.run_over()`
- `Prompt.bind()`
- `Prompt.bind_many()`
- `Prompt.bind_over()`
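The `*_many` variants amount to executing a prompt over multiple inputs concurrently. Their semantics can be sketched with `asyncio.gather`; `run_many` and `fake_prompt` below are hypothetical helpers, not rigging's API:

```python
import asyncio
from typing import Any, Awaitable, Callable

async def run_many(
    prompt: Callable[[str], Awaitable[Any]], inputs: list[str]
) -> list[Any]:
    """Execute an async prompt callable over many inputs concurrently."""
    return await asyncio.gather(*(prompt(i) for i in inputs))

async def fake_prompt(text: str) -> str:
    # Stand-in for an actual generation call.
    await asyncio.sleep(0)
    return text.upper()

results = asyncio.run(run_many(fake_prompt, ["a", "b", "c"]))
print(results)  # ['A', 'B', 'C']
```

Results come back in input order, one per input, which is the shape you would then parse or post-process.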
Or use `.fork()`, then handle the parsing of outputs yourself.