# Iterating and Batching

Iterating over messages, params, and generators, as well as batching of requests.
Rigging has good support for iterating over messages, params, and generators, as well as large batching of requests. How efficiently these mechanisms operate depends on the underlying generator being used, but Rigging has been developed with scale in mind.
## Multiple Generations

The `run_many` functions let you scale out generation N times with the same inputs:

- [`ChatPipeline.run_many()`][rigging.chat.ChatPipeline.run_many]
- [`CompletionPipeline.run_many()`][rigging.completion.CompletionPipeline.run_many]
- [`Prompt.run_many()`][rigging.prompt.Prompt.run_many]
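As a rough sketch, a `run_many` call might look like the following (the model identifier is illustrative, and an appropriate API key is assumed to be available in the environment):

```python
import asyncio

import rigging as rg


async def main() -> None:
    # Build one pipeline, then generate 3 independent chats
    # from the same input messages.
    pipeline = rg.get_generator("openai/gpt-4o-mini").chat("Give me a random color.")
    chats = await pipeline.run_many(3)

    for chat in chats:
        print(chat.last.content)


asyncio.run(main())
```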
## Batching Inputs

The `run_batch` functions let you batch across a set of inputs:

- [`ChatPipeline.run_batch()`][rigging.chat.ChatPipeline.run_batch]
- [`CompletionPipeline.run_batch()`][rigging.completion.CompletionPipeline.run_batch]

As processing proceeds with callbacks like `.then` or `.map`, the chats will resolve individually and collapse into the final results.
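A minimal sketch of batching over inputs, again assuming an illustrative model identifier and an available API key:

```python
import asyncio

import rigging as rg


async def main() -> None:
    inputs = [
        "Translate 'hello' to French.",
        "Translate 'hello' to Spanish.",
        "Translate 'hello' to German.",
    ]

    # Each input becomes its own chat; any .then()/.map() callbacks
    # resolve per-chat before the batch collapses into a list.
    pipeline = rg.get_generator("openai/gpt-4o-mini").chat()
    chats = await pipeline.run_batch(inputs)

    for chat in chats:
        print(chat.last.content)


asyncio.run(main())
```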
??? tip "Skipping failed results"

    Passing `on_failed='skip'` to [`.run_batch`][rigging.chat.ChatPipeline.run_batch], or configuring a pipeline with `.catch(..., on_failed='skip')`, will cause the function to ignore any parsing errors like `ExhaustedMaxRoundsError` and only return the chats that were successful.
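To illustrate, here is a sketch of skipping failures while batching against a parsing requirement (the `Answer` model and prompt wording are invented for the example; exact helper names like `xml_example()` are assumed from typical Rigging usage):

```python
import asyncio

import rigging as rg


class Answer(rg.Model):
    content: str


async def main() -> None:
    pipeline = (
        rg.get_generator("openai/gpt-4o-mini")
        .chat(f"Reply only with {Answer.xml_example()}")
        .until_parsed_as(Answer)
    )

    # Chats that never produce a valid <answer> block are dropped
    # instead of raising ExhaustedMaxRoundsError.
    chats = await pipeline.run_batch(
        ["Question one?", "Question two?"],
        on_failed="skip",
    )
    print(len(chats))  # may be fewer than the number of inputs


asyncio.run(main())
```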
## Batching Parameters

In addition to batching against input messages or strings, you can fix a single input and build a batch across a set of generation parameters. The inputs to [`.run_batch`][rigging.chat.ChatPipeline.run_batch] will scale either the generate params or the input messages if either is a single item.
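For example, a single input scaled across several parameter sets might be sketched as follows (assuming `run_batch` accepts a parallel list of `GenerateParams`; the model identifier is illustrative):

```python
import asyncio

import rigging as rg


async def main() -> None:
    pipeline = rg.get_generator("openai/gpt-4o-mini").chat()

    # One input, four parameter sets: the single input is scaled
    # to match the list of params, yielding four chats.
    chats = await pipeline.run_batch(
        ["Invent a product name."],
        [rg.GenerateParams(temperature=t) for t in (0.1, 0.5, 0.9, 1.2)],
    )

    for chat in chats:
        print(chat.last.content)


asyncio.run(main())
```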
## Iterating over Models

The `run_over` functions let you execute generation over a set of generators:

- [`ChatPipeline.run_over()`][rigging.chat.ChatPipeline.run_over]
- [`CompletionPipeline.run_over()`][rigging.completion.CompletionPipeline.run_over]
- [`Prompt.run_over()`][rigging.prompt.Prompt.run_over]

Generators can be passed as string identifiers or full instances of `Generator`. By default, the original generator associated with the `ChatPipeline` is included in the iteration, configurable with the `include_original` parameter.
Much like the [`run_many`][rigging.chat.ChatPipeline.run_many] and [`run_batch`][rigging.chat.ChatPipeline.run_batch] functions, you can control the handling of failures with the `on_failed` parameter.
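Putting those pieces together, a `run_over` call might be sketched like this (the model identifiers are illustrative, and API keys for each provider are assumed to be available):

```python
import asyncio

import rigging as rg


async def main() -> None:
    pipeline = rg.get_generator("openai/gpt-4o-mini").chat(
        "Summarize Rigging in one sentence."
    )

    # Run the same prompt across several models. String identifiers
    # and Generator instances can be mixed; the pipeline's original
    # generator is part of the iteration unless include_original=False.
    chats = await pipeline.run_over(
        "openai/gpt-4o",
        "anthropic/claude-3-5-sonnet-latest",
        on_failed="skip",
    )

    for chat in chats:
        print(chat.last.content)


asyncio.run(main())
```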