Generators
The core of generating messages and text.
Underlying LLMs (or any function that completes text) are represented as generators in Rigging. They are typically instantiated using identifier strings and the `get_generator` function. The base interface is flexible, and designed to support optimizations where the underlying mechanism allows them (async batching, K/V caching, etc.).
Identifiers
Much like database connection strings, Rigging generators can be represented as strings which define what provider, model, API key, generation params, etc. should be used. They are formatted as follows:
```
<provider>!<model>,<**kwargs>
```

- `provider` maps to a particular subclass of `Generator`.
- `model` is any `str` value, typically used by the provider to indicate a specific LLM to target.
- `kwargs` are used to carry:
    - the API key (`,api_key=...`) or the base URL (`,api_base=...`) for the model provider;
    - serialized `GenerateParams` fields like temperature, stop tokens, etc.;
    - additional provider-specific attributes to set on the constructed generator class. For instance, you can set the `LiteLLMGenerator.max_connections` property by passing `,max_connections=` in the identifier string.
The provider is optional; Rigging will fall back to `litellm`/`LiteLLMGenerator` by default.
You can view the LiteLLM docs for more information about supported model providers and parameters.
Here are some examples of valid identifiers:
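A few illustrative identifiers (the model names and key values below are placeholders, not recommendations):

```
gpt-4o
litellm!gpt-4o,temperature=0.7
anthropic/claude-3-5-sonnet-latest,api_key=sk-...,max_tokens=1024
```

The first form relies on the default `litellm` provider, while the others show an explicit provider and serialized kwargs.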
Building generators from string identifiers is optional, but a convenient way to represent complex LLM configurations.
Back to Strings
Any generator can be converted back into an identifier using either `to_identifier` or `get_identifier`.
API Keys
All generators carry a `.api_key` attribute which can be set directly, or by passing `,api_key=` as part of an identifier string. Not all generators require one, but they are common enough that we include the attribute in the base class.
Typically you will be using a library like LiteLLM underneath, and can simply use environment variables:
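For example, with an OpenAI model served through LiteLLM, you can export the key rather than embedding it in the identifier (the key value here is a placeholder):

```shell
# Placeholder key; LiteLLM reads provider keys from standard env vars.
export OPENAI_API_KEY="sk-placeholder"
```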
Rate Limits
Generators that leverage remote services (LiteLLM) expose properties for managing connection/request limits:

- `LiteLLMGenerator.max_connections`
- `LiteLLMGenerator.min_delay_between_requests`

However, a more flexible solution is `ChatPipeline.wrap()` with a library like backoff, which can catch broad or specific errors like rate limits or general connection issues.
You’ll find that the exception consistency inside LiteLLM can be quite poor. Different providers throw different types of exceptions for all kinds of status codes, response data, etc. With that said, you can typically find a target list that works well for your use-case.
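To illustrate the retry idea without pulling in backoff, here is a library-free sketch; `RateLimitError` and `flaky_generate` are stand-ins for illustration, not Rigging or LiteLLM APIs:

```python
import asyncio

# Illustrative stand-ins, not real Rigging or LiteLLM types.
class RateLimitError(Exception):
    pass

calls = {"n": 0}

async def flaky_generate(prompt: str) -> str:
    # Fails twice with a rate limit, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return f"echo: {prompt}"

async def with_retries(fn, *args, attempts: int = 5, base_delay: float = 0.01):
    for attempt in range(attempts):
        try:
            return await fn(*args)
        except RateLimitError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff between attempts.
            await asyncio.sleep(base_delay * 2**attempt)

result = asyncio.run(with_retries(flaky_generate, "hi"))
```

A real pipeline would target the specific exception types your provider actually raises.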
Local Models
We have experimental support for both `vllm` and `transformers` generators for loading and running local models. In general vLLM is more consistent with Rigging’s preferred API, but its dependency requirements are heavier.
Where needed, you can wrap an existing model into a Rigging generator by using the `VLLMGenerator.from_obj()` or `TransformersGenerator.from_obj()` methods. These are helpful for any picky model construction that might not play well with our Rigging constructors.
The use of these local generators requires the `vllm` and `transformers` packages to be installed. You can use `rigging[all]` to install them all at once, or pick your preferred packages individually.
See more about them below:

- `VLLMGenerator`
- `TransformersGenerator`
Loading and Unloading
You can use the `Generator.load` and `Generator.unload` methods to better control memory usage. Local providers are typically lazy and load the model into memory only when first needed.
Overload Generation Params
When working with both `CompletionPipeline` and `ChatPipeline`, you can overload and update any generation params by using the associated `.with_()` function.
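For instance (an illustrative sketch, assuming a generator has already been constructed; `temperature` and `max_tokens` are standard `GenerateParams` fields):

```python
pipeline = generator.chat("Say hello.").with_(temperature=0.9, max_tokens=256)
```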
HTTP Generator
The `HTTPGenerator` allows you to wrap any HTTP endpoint as a generator, making it easy to integrate external LLMs or AI services into your Rigging pipelines. It works by defining a specification that maps message content into HTTP requests and parses responses back into messages.
The specification is assigned to the `.spec` field on the generator, and can be applied as a Python dictionary, JSON string, YAML string, or base64-encoded JSON/YAML string. This flexibility makes it easy to share and reuse specifications across different parts of your application.
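As an illustration, a spec for a simple JSON API might look like this in YAML (the URL, header, and field names here are hypothetical; see `HTTPSpec` for the authoritative schema):

```yaml
request:
  url: "https://api.example.com/v1/generate"
  method: "POST"
  headers:
    "Authorization": "Bearer {{ api_key }}"
  transforms:
    - type: "json"
      pattern:
        prompt: "$content"
response:
  transforms:
    - type: "jsonpath"
      pattern: "$.output"
```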
1. We are using the `.model` field on the generator to carry our crucible challenge.
Saving schemas
Encoded YAML is the default storage format when an HTTP generator is serialized to an identifier using `to_identifier`. This also means that when we save our chats to storage, they maintain their HTTP specification.
Specification
The specification (`HTTPSpec`) controls how messages are transformed around HTTP interactions. It supports:
- Template-based URLs
- Template-based header generation
- Configurable timeouts and HTTP methods
- Status code validation
- Flexible body transformations for both the request and response
When building requests, the following context variables (`RequestTransformContext`) are available in your transform patterns:

- `role` - role of the last message (user/assistant/system)
- `content` - content of the last message
- `all_content` - concatenated content of all messages
- `messages` - list of all message objects
- `params` - generation parameters (temperature, max_tokens, etc.)
- `api_key` - API key from the generator
- `model` - model identifier from the generator
For both request and response transform chains, the previous result of each transform is provided to the next transform via any of `data`, `output`, `result`, or `body`.
Transforms
The HTTP generator supports different types of transforms for both request building and response parsing. Each serves a specific purpose and has its own pattern syntax.
Transform Chaining
Transforms are applied in sequence, with each transform’s output becoming the input for the next. This allows you to build complex processing pipelines:
Jinja (request + response)
The `jinja` transform type provides full Jinja2 template syntax. Access context variables directly and use Jinja2 filters and control structures.
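For example, a pattern that flattens the whole conversation into a single string might look like this (illustrative):

```
{% for message in messages %}{{ message.role }}: {{ message.content }}
{% endfor %}
```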
JSON (request only)
The `json` transform type lets you build JSON request bodies using a template object. Use the `$` prefix to reference context variables, with dot notation for nested access:
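As a rough stand-in for how such a template could resolve (an illustrative reimplementation, not Rigging's actual code):

```python
import json

# Hypothetical context, mirroring a few RequestTransformContext variables.
context = {
    "content": "What is the capital of France?",
    "model": "my-model",
    "params": {"temperature": 0.7},
}

def resolve(template):
    # "$"-prefixed strings are looked up in the context, with dot
    # notation for nested access; everything else passes through.
    if isinstance(template, str) and template.startswith("$"):
        value = context
        for part in template[1:].split("."):
            value = value[part]
        return value
    if isinstance(template, dict):
        return {key: resolve(val) for key, val in template.items()}
    if isinstance(template, list):
        return [resolve(val) for val in template]
    return template

body = resolve(
    {"model": "$model", "prompt": "$content", "temperature": "$params.temperature"}
)
payload = json.dumps(body)
```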
JSONPath (response only)
The `jsonpath` transform type uses JSONPath expressions to extract data from JSON responses:
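Real specs would use full JSONPath expressions; as a minimal stand-in that handles only simple `$.a[0].b`-style paths (the response shape below is illustrative):

```python
import re

# A hypothetical JSON response from an upstream API.
response = {"choices": [{"message": {"content": "Hello!"}}]}

def jsonpath_get(data, path):
    # Walk "$.key[index].key" segments; not a full JSONPath engine.
    value = data
    for key, index in re.findall(r"(\w+)(?:\[(\d+)\])?", path):
        value = value[key]
        if index:
            value = value[int(index)]
    return value

content = jsonpath_get(response, "$.choices[0].message.content")
```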
Regex (response only)
The `regex` transform type uses regular expressions to extract content from text responses:
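For instance, pulling a reply out of a plain-text response with Python's `re` module (the pattern and response shape are illustrative):

```python
import re

# A hypothetical plain-text HTTP response body.
text = "status: ok\nanswer: The capital of France is Paris.\n"

# Capture everything after the "answer: " label.
match = re.search(r"answer: (.+)", text)
reply = match.group(1) if match else ""
```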
Writing a Generator
All generators should inherit from the `Generator` base class, and can elect to implement handlers for messages and/or texts:

- `async def generate_messages(...)` - used for `ChatPipeline.run` variants.
- `async def generate_texts(...)` - used for `CompletionPipeline.run` variants.
If your generator doesn’t implement a particular method like text completions, Rigging will simply raise a NotImplementedError
for you. It’s currently undecided whether generators should prefer to provide weak overloads for compatibility, or whether they should ignore methods which can’t be used optimally to help provide clarity to the user about capability. You’ll find we’ve opted for the former strategy in our generators.
Generators operate in a batch context by default, taking in groups of message lists or texts. Whether your implementation takes advantage of this batching is up to you, but where possible you should be optimizing as much as possible.
Generators don’t make any assumptions about the underlying mechanism that completes text. You might use a local model, an API endpoint, or static code. The base class is designed to be flexible and support a wide variety of use cases. You’ll find that `api_key`, `model`, and generation params are common enough that they’re included in the base class.
Use the `register_generator` method to add your generator class under a custom provider id so it can be used with `get_generator`.