get_generator function. The base interface is flexible, and designed to support optimizations should the underlying mechanisms support them (batching, async, K/V cache, etc.)
Identifiers
Much like database connection strings, Rigging generators can be represented as strings which define what provider, model, API key, generation params, etc. should be used. Throughout our code, we frequently use these generator identifiers as CLI arguments, environment variables, and API parameters. They are convenient for passing around complex configurations without having to represent model configurations in multiple places. They are also used to serialize generators to storage when chats are stored, so you can save and load them easily without having to reconfigure the generator each time.
provider maps to a particular subclass of Generator (optional). model is any str value, typically used by the provider to indicate a specific LLM to target. kwargs are used to carry:
- The API key (,api_key=...) or the base URL (,api_base=...) for the model provider.
- Serialized GenerateParams fields like temperature, stop tokens, etc.
- Additional provider-specific attributes to set on the constructed generator class. For instance, you can set the LiteLLMGenerator.max_connections property by passing ,max_connections= in the identifier string.
If no provider is specified, identifiers resolve to litellm / LiteLLMGenerator by default.
You can view the LiteLLM docs for more information about supported model providers and parameters.
Building generators from string identifiers is optional, but a convenient way to represent complex LLM configurations.
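To make the pieces concrete, here is a stdlib-only sketch that pulls apart an identifier into its provider, model, and kwargs. The `provider!model,key=value` shape and the parsing rules are illustrative assumptions for this sketch, not Rigging's actual parser:

```python
# Illustrative parser for a "provider!model,key=value,..." identifier
# shape (an assumption for demonstration -- not Rigging's real code).
def parse_identifier(identifier: str) -> dict:
    provider, sep, rest = identifier.partition("!")
    if not sep:
        # No explicit provider -> fall back to litellm, per the docs above.
        provider, rest = "litellm", provider
    model, *pairs = rest.split(",")
    kwargs = dict(pair.split("=", 1) for pair in pairs)
    return {"provider": provider, "model": model, "kwargs": kwargs}

parsed = parse_identifier("litellm!gpt-4,api_key=sk-placeholder,temperature=0.7")
print(parsed["provider"], parsed["model"], parsed["kwargs"]["temperature"])
```

This mirrors how one identifier string can carry the provider choice, the model name, the API key, and serialized generation params all at once.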
API Keys
All generators carry an .api_key attribute which can be set directly, or by passing ,api_key= as part of an identifier string. Not all generators will require one, but they are common enough that we include the attribute as part of the base class.
Typically you will be using a library like LiteLLM underneath, and can simply use environment variables:
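For example, with LiteLLM underneath, setting the provider's standard environment variable is usually enough; the key below is a placeholder:

```python
import os

# LiteLLM-backed generators typically read provider keys from standard
# environment variables, so no explicit api_key is needed in the
# identifier. The value here is a placeholder, not a real key.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"
print(os.environ["OPENAI_API_KEY"])
```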
Rate Limits
Generators that leverage remote services (LiteLLM) expose properties for managing connection/request limits:
- LiteLLMGenerator.max_connections
- LiteLLMGenerator.min_delay_between_requests
You can also use ChatPipeline.wrap() with a library like backoff to catch many, or specific, errors such as rate limits or general connection issues.
You’ll find that the exception consistency inside LiteLLM can be quite poor. Different providers throw different types of exceptions for all kinds of status codes, response data, etc. With that said, you can typically find a target list that works well for your use-case.
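As a sketch of that wrapping idea, here is a stdlib-only stand-in for what ChatPipeline.wrap() plus the backoff library gives you; RateLimitError and the call being retried are hypothetical stand-ins for your provider's actual exceptions:

```python
import asyncio

# Stand-in for whatever exception your provider actually raises --
# see the caveat above about LiteLLM's exception consistency.
class RateLimitError(Exception):
    pass

def with_backoff(fn, max_tries: int = 4, base_delay: float = 0.01):
    # Wraps an async call with exponential backoff on RateLimitError,
    # the same shape ChatPipeline.wrap() + backoff would provide.
    async def wrapped(*args, **kwargs):
        for attempt in range(max_tries):
            try:
                return await fn(*args, **kwargs)
            except RateLimitError:
                if attempt == max_tries - 1:
                    raise
                # Exponential backoff: 0.01s, 0.02s, 0.04s, ...
                await asyncio.sleep(base_delay * 2**attempt)
    return wrapped

calls = {"count": 0}

async def flaky_generate(prompt: str) -> str:
    # Fails twice with a rate-limit error, then succeeds.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return f"ok: {prompt}"

result = asyncio.run(with_backoff(flaky_generate)("hi"))
print(result, calls["count"])  # ok: hi 3
```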
Local Models
We have experimental support for both vLLM and transformers generators for loading and running models directly in the same Python process. In general, vLLM is more consistent with Rigging’s preferred API, but its dependency requirements are heavier.
Where needed, you can wrap an existing model into a rigging generator by using the VLLMGenerator.from_obj() or TransformersGenerator.from_obj() methods. These are helpful for any picky model construction that might not play well with our rigging constructors.
The use of these local generators requires the vllm and transformers packages to be installed. You can use rigging[all] to install them all at once, or pick your preferred packages individually.
Self-Hosted Models
In addition to loading models directly inside the Python process, you often want to access models via some self-hosted server like Ollama or vLLM. Using self-hosted models is well supported in the LiteLLM ecosystem, and usually just requires some consideration for the API base URL and API key. Beyond specific servers, many services expose models in the “openai-compatible” format, which can be used with the openai/ LiteLLM prefix (usually just openai/<model>,api_base=http://...,api_key=...).
- https://docs.litellm.ai/docs/providers/vllm
- https://docs.litellm.ai/docs/providers/ollama
- https://docs.litellm.ai/docs/providers/openai_compatible
Self-Hosted Ollama
We’ll load the qwen3:0.6b model from Ollama; the ollama server hosts the model on http://localhost:11434 by default.
You can use either the ollama/ or ollama_chat/ prefixes:
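For instance, a chat-style identifier for the model above might look like this (the model name comes from the example; swap in your own as needed):

```python
# Example identifier for a local Ollama server; ollama_chat/ routes
# through Ollama's chat endpoint, which is usually what you want for
# instruction-tuned models.
generator_id = "ollama_chat/qwen3:0.6b"
print(generator_id)
```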
If you are running the Ollama server somewhere besides localhost, just pass the api_base to the generator:
Self-Hosted vLLM Server
vLLM ships with its own OpenAI-compatible server via the vllm serve command. LiteLLM uses the hosted_vllm/ prefix to connect to it; otherwise you can use the openai/ prefix noted below.
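A hypothetical identifier for a local vllm serve instance might look like the following; the model name, host, port, and whether a /v1 suffix is needed on api_base all depend on your setup (vllm serve listens on port 8000 by default):

```python
# Illustrative hosted_vllm identifier; every value here is a
# placeholder -- adjust the model, host, and path for your server.
generator_id = "hosted_vllm/Qwen/Qwen2.5-7B-Instruct,api_base=http://localhost:8000/v1"
print(generator_id)
```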
Self-Hosted OpenAI-Compatible Server
For most other self-hosted models, the server will expose OpenAI-compatible endpoints, and you can use the openai/ prefix for LiteLLM as noted in their docs:
Selecting openai as the provider routes your request to an OpenAI-compatible endpoint using the upstream official OpenAI Python API library. This library requires an API key for all requests, either through the api_key parameter or the OPENAI_API_KEY environment variable. If you don’t want to provide a fake API key in each request, consider using a provider that directly matches your OpenAI-compatible endpoint, such as hosted_vllm or llamafile.
You can also use the openai/ prefix along with api_key= and api_base= for vLLM:
Overload Generation Params
When working with both CompletionPipeline and ChatPipeline, you can overload and update any generation params by using the associated .with_() function.
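The merge behavior can be sketched as follows; GenerateParams here is a simplified stand-in for rigging's class, and a real .with_() call returns the pipeline itself rather than a params object:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Simplified stand-in for rigging's GenerateParams.
@dataclass
class GenerateParams:
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None

def with_(base: GenerateParams, **overrides) -> GenerateParams:
    # Later, non-None values win over the existing params, mirroring
    # how pipeline.with_(temperature=0.9) updates later generation.
    merged = asdict(base)
    merged.update({k: v for k, v in overrides.items() if v is not None})
    return GenerateParams(**merged)

params = with_(GenerateParams(temperature=0.5), max_tokens=256)
print(params)
```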
HTTP Generator
The HTTPGenerator allows you to wrap any HTTP endpoint as a generator, making it easy to integrate external LLMs or AI services into your Rigging pipelines. It works by defining a specification that maps message content into HTTP requests and parses responses back into messages.
The specification is assigned to the .spec field on the generator, and can be applied as a Python dictionary, JSON string, YAML string, or base64 encoded JSON/YAML string.
This flexibility allows you to easily share and reuse specifications across different parts of your application.
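As a quick illustration of those encodings, here is how a dict spec round-trips through base64-encoded JSON; the field names below are placeholders, not Rigging's actual spec schema:

```python
import base64
import json

# The same spec expressed as a dict, a JSON string, and base64-encoded
# JSON -- the kinds of forms the .spec field accepts. Field names are
# illustrative placeholders only.
spec_dict = {"url": "https://api.example.com/chat", "method": "POST"}
spec_json = json.dumps(spec_dict)
spec_b64 = base64.b64encode(spec_json.encode()).decode()

# Decoding recovers the original structure exactly.
assert json.loads(base64.b64decode(spec_b64)) == spec_dict
print(spec_b64)
```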
Simple Factories (Recommended)
For most common use cases, you can use one of the built-in factory methods. These provide a simple, high-level interface and give you full autocompletion for configuration in your IDE.
For JSON APIs: HTTPGenerator.for_json_endpoint()
This is the perfect choice for APIs that accept a JSON request body and return a JSON response.
For Text & Template-based APIs: HTTPGenerator.for_text_endpoint()
If you’re interacting with a simpler API that takes plain text, this factory is ideal. The entire request body is a single Jinja2 template string.
Advanced Usage with HTTPSpec
For maximum control, you can bypass the factories and define a full HTTPSpec object yourself. This is useful for complex scenarios involving multi-step transformations or other non-standard requirements.
Here, we use the .model field on the generator to carry our crucible challenge.
State Management
You can make any HTTPGenerator stateful by using the .state dictionary. This is a mutable dictionary that you can use to store any dynamic information, like session IDs or temporary credentials, that needs to be accessed by your templates.
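A minimal sketch of the pattern, using stdlib string.Template as a stand-in for the generator's actual templates; the URL and state keys are made up for illustration:

```python
from string import Template

# Mutable state that outside code can update between requests; the
# template reads whatever value is current at render time.
state = {"session_id": "unset"}
url_template = Template("https://api.example.com/chat?session=$session_id")

state["session_id"] = "abc123"  # e.g. set after a login request
url = url_template.substitute(state)
print(url)
```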
Hooks
For advanced scenarios like handling expiring credentials, you can combine the state dictionary with an async hook function. The hook is called after every HTTP request, allowing it to inspect the response and dynamically update the state before automatically retrying.
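The shape of that flow can be sketched with stdlib-only stand-ins; the hook signature, the status codes, and the retry logic below are illustrative assumptions, not Rigging's exact API:

```python
import asyncio

state = {"token": "expired"}

async def refresh_hook(status_code: int, state: dict) -> bool:
    # Inspect the response; on an auth failure, refresh the credential
    # in state and return True to signal a retry.
    if status_code == 401:
        state["token"] = "fresh-token"
        return True
    return False

async def request_with_hook() -> str:
    # Simulated request loop: first attempt fails with 401, the hook
    # refreshes the token, and the retry succeeds.
    for _ in range(2):
        status = 401 if state["token"] == "expired" else 200
        if status == 200:
            return "ok"
        if not await refresh_hook(status, state):
            break
    return "failed"

outcome = asyncio.run(request_with_hook())
print(outcome)  # ok
```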
Template Context Variables
When building requests with either the factories or a full spec, the following RequestTransformContext variables are available in your Jinja templates (e.g., {{ variable }}) and JSON value substitutions (e.g., "$variable"):
- content: Content of the last message.
- messages: List of all message objects in the history.
- api_key: The API key from the generator’s configuration.
- model: The model identifier from the generator’s configuration.
- state: The generator’s mutable state dictionary (see State Management).
- params: Generation parameters from the current call (e.g., temperature).
- all_content: Concatenated content of all messages.
- role: Role of the last message (user/assistant/system).
In spec files with multiple transformation steps, the output of the previous step is also available to the next step as result, data, or output.
Transforms
The HTTP generator supports different types of transforms for both request building and response parsing. Each serves a specific purpose and has its own pattern syntax.
Jinja (request + response)
The jinja transform type provides full Jinja2 template syntax. Access context variables directly and use Jinja2 filters and control structures.
JSON (request)
The json transform type lets you build JSON request bodies using a template object. Use the $ prefix to reference context variables, with dot notation for nested access:
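The $-substitution idea can be sketched in a few lines of plain Python; the template shape and resolution rules here are illustrative, not Rigging's exact schema:

```python
# Context like the variables listed above (values are made up).
context = {
    "content": "Hello!",
    "model": "my-model",
    "state": {"session_id": "abc123"},
}

def resolve(value, ctx):
    # Strings prefixed with "$" are looked up in the context; dot
    # notation ("$state.session_id") walks nested objects. Dicts and
    # lists are resolved recursively; everything else passes through.
    if isinstance(value, str) and value.startswith("$"):
        node = ctx
        for part in value[1:].split("."):
            node = node[part]
        return node
    if isinstance(value, dict):
        return {k: resolve(v, ctx) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve(v, ctx) for v in value]
    return value

template = {"model": "$model", "prompt": "$content", "session": "$state.session_id"}
resolved = resolve(template, context)
print(resolved)
```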
JSONPath (response)
The jsonpath transform type uses JSONPath expressions to extract data from JSON responses:
Regex (response)
The regex transform type uses regular expressions to extract content from text responses:
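As a plain-Python illustration of what such a transform does (the response text and pattern here are made up for the example):

```python
import re

# Hypothetical raw text response from an endpoint.
raw = "STATUS: ok\nANSWER: The capital of France is Paris.\n"

# A pattern of this shape could serve as a regex response transform,
# capturing just the answer portion of the body.
match = re.search(r"ANSWER:\s*(.+)", raw)
content = match.group(1) if match else raw
print(content)  # The capital of France is Paris.
```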
Writing a Generator
All generators should inherit from the Generator base class, and can elect to implement handlers for messages and/or texts:
- async def generate_messages(...) - Used for ChatPipeline.run variants.
- async def generate_texts(...) - Used for CompletionPipeline.run variants.
If your generator doesn’t implement a particular method like text completions, Rigging will simply raise a
NotImplementedError for you. It’s currently undecided whether generators should prefer to provide weak overloads for compatibility, or whether they should ignore methods which can’t be used optimally to help provide clarity to the user about capability. You’ll find we’ve opted for the former strategy in our generators.
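Structurally, a custom generator looks something like this sketch; Generator here is a stand-in base class, and both method signatures are simplified relative to rigging's actual generate_messages/generate_texts:

```python
import asyncio

# Stand-in for rigging's Generator base class: unimplemented handlers
# raise NotImplementedError, as described above.
class Generator:
    async def generate_messages(self, messages, params):
        raise NotImplementedError

    async def generate_texts(self, texts, params):
        raise NotImplementedError

class EchoGenerator(Generator):
    # Implements only the messages handler; calling generate_texts on
    # this generator will raise NotImplementedError.
    async def generate_messages(self, messages, params):
        # One "generation" per conversation: echo the last message.
        return [f"echo: {conversation[-1]}" for conversation in messages]

gen = EchoGenerator()
result = asyncio.run(gen.generate_messages([["hi"]], {}))
print(result)  # ['echo: hi']
```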
