Iterating over messages, params, and generators, as well as batching of requests.
`run_many` functions let you scale out generation N times with the same inputs:

- `ChatPipeline.run_many()`
- `CompletionPipeline.run_many()`
- `Prompt.run_many()`
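As a rough sketch, assuming an async context and a placeholder `gpt-4o-mini` model identifier, scaling a chat pipeline out three times might look like this:

```python
import asyncio

import rigging as rg


async def main() -> None:
    # Placeholder model identifier -- substitute your own provider/model string.
    pipeline = rg.get_generator("gpt-4o-mini").chat("Give me a fun fact.")

    # Run the same pipeline 3 times and collect the resulting chats.
    chats = await pipeline.run_many(3)

    for i, chat in enumerate(chats):
        print(f"--- Chat {i} ---")
        print(chat.last.content)


asyncio.run(main())
```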
`run_batch` functions let you batch across a set of inputs:

- `ChatPipeline.run_batch()`
- `CompletionPipeline.run_batch()`
If your pipeline includes callbacks assigned with `.then()` or `.map()`, the chats will resolve individually and collapse into the final results.
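For illustration, here is a minimal sketch of a `.then()` callback used together with `run_many`; the exact callback signature and chaining behavior are assumptions and may differ by version:

```python
import asyncio

import rigging as rg


async def log_chat(chat: rg.Chat) -> rg.Chat | None:
    # Runs once for each chat as it resolves; returning None keeps the chat unchanged.
    print("resolved:", chat.last.content[:60])
    return None


async def main() -> None:
    pipeline = (
        rg.get_generator("gpt-4o-mini")  # placeholder identifier
        .chat("Summarize Hamlet in one sentence.")
        .then(log_chat)
    )
    chats = await pipeline.run_many(2)
    print(f"collected {len(chats)} chats")


asyncio.run(main())
```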
Passing `on_failed='skip'` to `.run_batch()`, or configuring a pipeline with `.catch(..., on_failed='skip')`, will cause the function to ignore any parsing errors like `ExhaustedMaxRoundsError` and only return successful chats.

`.run_batch()` will scale either the generate parameters or the input messages if either is a single item.
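A hedged sketch of batching over several inputs, where a single-item params list is scaled across all of them and failed chats are skipped; the model identifier and exact argument shapes are assumptions:

```python
import asyncio

import rigging as rg


async def main() -> None:
    pipeline = rg.get_generator("gpt-4o-mini").chat()  # placeholder identifier

    questions = [
        "What is the capital of France?",
        "What is the capital of Japan?",
        "What is the capital of Brazil?",
    ]

    # The single-item params list is scaled to match the three inputs;
    # on_failed="skip" drops chats that fail instead of raising.
    chats = await pipeline.run_batch(
        questions,
        [rg.GenerateParams(temperature=0.2)],
        on_failed="skip",
    )

    for chat in chats:
        print(chat.last.content)


asyncio.run(main())
```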
`run_over` functions let you execute generation over a set of generators:

- `ChatPipeline.run_over()`
- `CompletionPipeline.run_over()`
- `Prompt.run_over()`
Generators can be passed as string identifiers or full `Generator` objects. By default the original generator associated with the `ChatPipeline` is included in the iteration, configurable with the `include_original` parameter.
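As a sketch, running the same chat over two additional generators might look like the following; the generator identifiers are placeholders, and the `generator_id` attribute on the resulting chats is an assumption:

```python
import asyncio

import rigging as rg


async def main() -> None:
    pipeline = rg.get_generator("gpt-4o-mini").chat("Name one prime number.")

    # Iterate over two extra generators; include_original=False leaves the
    # pipeline's own generator out of the run.
    chats = await pipeline.run_over(
        "gpt-4o",
        "claude-3-5-haiku-latest",
        include_original=False,
    )

    for chat in chats:
        print(chat.generator_id, "->", chat.last.content)


asyncio.run(main())
```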
Much like the `run_many` and `run_batch` functions, you can control the handling of failures with the `on_failed` parameter.
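If you prefer to configure failure handling once on the pipeline rather than per call, a sketch using `.catch()` could look like this; the import path for the error type is an assumption:

```python
import asyncio

import rigging as rg
from rigging.error import ExhaustedMaxRoundsError  # assumed import path


async def main() -> None:
    pipeline = (
        rg.get_generator("gpt-4o-mini")  # placeholder identifier
        .chat("Reply with valid JSON only.")
        # Skip any chats that exhaust their parsing rounds instead of raising.
        .catch(ExhaustedMaxRoundsError, on_failed="skip")
    )

    chats = await pipeline.run_many(5)
    print(f"{len(chats)} chats succeeded")


asyncio.run(main())
```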