Language Model Interface (LMI)


A Python library for interacting with Large Language Models (LLMs) through a unified interface, hence the name Language Model Interface (LMI).

Installation

```bash
pip install fhlmi
```

Quick start

A simple example of how to use the library with default settings is shown below.
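
A minimal sketch (the model name and prompt are illustrative):

```python
import asyncio

from aviary.core import Message
from lmi import LiteLLMModel


async def main() -> None:
    llm = LiteLLMModel(name="gpt-4o")
    messages = [Message(role="user", content="What is the meaning of life?")]
    results = await llm.call(messages)  # call returns a list of LLMResults
    print(results[0].text)


asyncio.run(main())
```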

or, if you only have one user message, just:
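
A sketch, assuming call_single also accepts a bare prompt string and returns a single LLMResult rather than a list:

```python
result = await llm.call_single("What is the meaning of life?")
print(result.text)
```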

Documentation

LLMs

An LLM is a class that inherits from LLMModel and implements the following methods:

  • async acompletion(messages: list[Message], **kwargs) -> list[LLMResult]

  • async acompletion_iter(messages: list[Message], **kwargs) -> AsyncIterator[LLMResult]

These methods are used by the base class LLMModel to implement the LLM interface. Because LLMModel is an abstract class, it doesn't depend on any specific LLM provider. All communication with the provider is done in the subclasses, using acompletion and acompletion_iter as the interface.

Because these are the only methods that communicate with the chosen LLM provider, we use the LLMResult abstraction to hold the results of an LLM call.
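
To make the contract concrete, here is a toy subclass sketch; the top-level LLMResult export and its constructor fields are assumptions, not a prescribed recipe:

```python
from collections.abc import AsyncIterator

from aviary.core import Message
from lmi import LLMModel, LLMResult  # assumed top-level exports


class EchoModel(LLMModel):
    """Toy provider that 'completes' by echoing the last message."""

    async def acompletion(self, messages: list[Message], **kwargs) -> list[LLMResult]:
        # A real subclass would call its provider's API here and map the
        # provider response into LLMResult objects.
        return [LLMResult(model="echo", text=messages[-1].content)]

    async def acompletion_iter(
        self, messages: list[Message], **kwargs
    ) -> AsyncIterator[LLMResult]:
        # A real subclass would stream partial completions from the provider.
        yield LLMResult(model="echo", text=messages[-1].content)
```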

LLMModel

An LLMModel implements call, which receives a list of aviary Messages and returns a list of LLMResults. LLMModel.call can receive callbacks, tools, and output schemas to control its behavior, as explained below. Because we support interacting with LLMs through Message objects, we can use the modalities available in aviary, which currently include text and images; lmi does not support other modalities yet. Additionally, LLMModel.call_single can be used to return a single LLMResult completion.

LiteLLMModel

LiteLLMModel wraps LiteLLM API usage within our LLMModel interface. It receives a name parameter, which is the name of the model to use, and a config parameter, which is a dictionary of configuration options for the model following the LiteLLM configuration schema. Common parameters such as temperature, max_tokens, and n (the number of completions to return) can be passed as part of the config dictionary.
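
For example, with a LiteLLM router-style configuration (the model_list schema follows LiteLLM's conventions; all values are illustrative):

```python
from lmi import LiteLLMModel

config = {
    "model_list": [
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "gpt-4o",
                "temperature": 0.1,
                "max_tokens": 512,
            },
        }
    ]
}

llm = LiteLLMModel(name="gpt-4o", config=config)
```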

config can also be used to pass common parameters directly to the model, for example (values illustrative):
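
```python
llm = LiteLLMModel(
    name="gpt-4o",
    config={"temperature": 0.1, "max_tokens": 512, "n": 2},
)
```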

Cost tracking

Cost tracking is supported in two different ways:

  1. Calls to the LLM return the token usage for each call in LLMResult.prompt_count and LLMResult.completion_count. Additionally, LLMResult.cost can be used to get a cost estimate for the call in USD.

  2. A global cost tracker is maintained in GLOBAL_COST_TRACKER and can be enabled or disabled using enable_cost_tracking() and cost_tracking_ctx().
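
A sketch of both mechanisms (the import paths and the lifetime_cost_usd attribute are assumptions):

```python
from aviary.core import Message
from lmi import LiteLLMModel, cost_tracking_ctx
from lmi.cost_tracker import GLOBAL_COST_TRACKER  # assumed import path

llm = LiteLLMModel(name="gpt-4o")

with cost_tracking_ctx():
    result = await llm.call_single([Message(role="user", content="Say hello.")])
    # 1. Per-call accounting:
    print(result.prompt_count, result.completion_count, result.cost)
    # 2. Global accounting (attribute name is an assumption):
    print(GLOBAL_COST_TRACKER.lifetime_cost_usd)
```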

Rate limiting

Rate limiting helps regulate the usage of resources across various services and LLMs. The rate limiter supports both in-memory and Redis-based storage for cross-process rate limiting. Currently, lmi takes into account the tokens used (tokens per minute, TPM) and the requests handled (requests per minute, RPM).

Basic Usage

Rate limits can be configured in two ways:

  1. Through the LLM configuration:

    With rate_limit we limit only token consumption, and with request_limit we limit only request volume. You can configure both or just one, as needed, as in the sketch below.
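
A sketch (the nested mapping from model name to limit inside each key is an assumption; limit values are illustrative):

```python
from lmi import LiteLLMModel

llm = LiteLLMModel(
    name="gpt-4o",
    config={
        # Token consumption (TPM):
        "rate_limit": {"gpt-4o": "30000 per 1 minute"},
        # Request volume (RPM):
        "request_limit": {"gpt-4o": "100 per 1 minute"},
    },
)
```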

  2. Through the global rate limiter configuration:

    With the client key we limit only token consumption, and with the client|request key we limit only request volume. You can configure both or just one, as needed, as in the sketch below.
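
A sketch, assuming the global limiter exposes a rate_config mapping keyed by ("client", <model>) and ("client|request", <model>) tuples:

```python
from limits import RateLimitItemPerSecond
from lmi.rate_limiter import GLOBAL_LIMITER  # assumed import path

# Token consumption for gpt-4o (values illustrative):
GLOBAL_LIMITER.rate_config[("client", "gpt-4o")] = RateLimitItemPerSecond(500, 1)
# Request volume for gpt-4o:
GLOBAL_LIMITER.rate_config[("client|request", "gpt-4o")] = RateLimitItemPerSecond(5, 1)
```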

Rate Limit Format

Rate limits can be specified in two formats:

  1. As a string: "<count> [per|/] [n (optional)] <second|minute|hour|day|month|year>"

  2. Using RateLimitItem classes:
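
For example (the RateLimitItem classes come from the limits library; counts are illustrative):

```python
from limits import RateLimitItemPerMinute, RateLimitItemPerSecond

# 1. String format: "<count> [per|/] [n (optional)] <unit>"
token_limit = "30000 per 1 minute"
request_limit = "100/minute"

# 2. RateLimitItem classes: an amount, then the multiple of the time unit
per_second = RateLimitItemPerSecond(30, 1)  # 30 per second
per_minute = RateLimitItemPerMinute(1000, 1)  # 1000 per minute
```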

Storage Options

The rate limiter supports two storage backends:

  1. In-memory storage (default when Redis is not configured):
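
A sketch (GlobalRateLimiter and its use_in_memory flag are assumptions about the rate limiter's API):

```python
from lmi.rate_limiter import GlobalRateLimiter  # assumed import path

limiter = GlobalRateLimiter(use_in_memory=True)
```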

  2. Redis storage (for cross-process rate limiting):
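
A sketch, assuming the limiter picks up Redis from the REDIS_URL environment variable:

```python
import os

# Point at a shared Redis instance (URL is illustrative):
os.environ["REDIS_URL"] = "redis://localhost:6379"

from lmi.rate_limiter import GlobalRateLimiter  # assumed import path

limiter = GlobalRateLimiter()  # uses Redis when REDIS_URL is set
```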

    This limiter can be used within the LLMModel.check_rate_limit method to check the rate limit before making a request, similarly to how it is done in the LiteLLMModel class.

Monitoring Rate Limits

You can monitor current rate limit status:
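
A sketch (the rate_limit_status method name and its output shape are assumptions):

```python
from lmi.rate_limiter import GLOBAL_LIMITER  # assumed import path

status = await GLOBAL_LIMITER.rate_limit_status()
# Expected shape: each (namespace, key) pair mapped to its usage in the current period
print(status)
```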

Timeout Configuration

The default timeout for rate limiting is 60 seconds, but can be configured:
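
A sketch (try_acquire and its acquire_timeout parameter are assumptions):

```python
from lmi.rate_limiter import GLOBAL_LIMITER  # assumed import path

# Wait at most 30 seconds (instead of the 60-second default) for capacity:
await GLOBAL_LIMITER.try_acquire(("client", "gpt-4o"), acquire_timeout=30.0)
```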

Weight-based Rate Limiting

Rate limits can account for different weights (e.g., token counts for LLM requests):
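
For example (a sketch; the weight parameter on try_acquire is an assumption based on the description above):

```python
from lmi.rate_limiter import GLOBAL_LIMITER  # assumed import path

# Consume capacity for 1000 tokens at once rather than a single unit:
await GLOBAL_LIMITER.try_acquire(("client", "gpt-4o"), weight=1000)
```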

Tool calling

LMI supports function calling through tools, which are functions that the LLM can invoke. Tools are passed to LLMModel.call or LLMModel.call_single as a list of Tool objects from aviary, along with an optional tool_choice parameter that controls how the LLM uses these tools.

The tool_choice parameter follows OpenAI's definition. It can be:

| Tool Choice Value | Constant | Behavior |
| --- | --- | --- |
| "none" | LLMModel.NO_TOOL_CHOICE | The model will not call any tools and instead generates a message |
| "auto" | LLMModel.MODEL_CHOOSES_TOOL | The model can choose between generating a message or calling one or more tools |
| "required" | LLMModel.TOOL_CHOICE_REQUIRED | The model must call one or more tools |
| A specific aviary.Tool object | N/A | The model must call this specific tool |
| None | LLMModel.UNSPECIFIED_TOOL_CHOICE | No tool choice preference is provided to the LLM API |

When tools are provided, the LLM's response will be wrapped in a ToolRequestMessage instead of a regular Message. The key differences are:

  • Message represents a basic chat message with a role (system/user/assistant) and content

  • ToolRequestMessage extends Message to include tool_calls, a list of ToolCall objects describing which tools the LLM chose to invoke and with what arguments

Further details on defining a tool and using the ToolRequestMessage and ToolCall objects can be found in the Aviary documentation.

Here is a minimal example usage:
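
(A sketch: the weather tool, model name, and the messages attribute access are illustrative assumptions; Tool.from_function is aviary's helper for building a Tool from a documented function.)

```python
from aviary.core import Message, Tool
from lmi import LiteLLMModel


def get_temperature(location: str) -> str:
    """Get the current temperature of a location.

    Args:
        location: The location to query.
    """
    return f"It is currently 72 F in {location}."


llm = LiteLLMModel(name="gpt-4o")
result = await llm.call_single(
    messages=[Message(role="user", content="What is the temperature in Paris?")],
    tools=[Tool.from_function(get_temperature)],
)
# With tools provided, the completion is a ToolRequestMessage:
tool_request_message = result.messages[0]
print(tool_request_message.tool_calls)
```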

Vertex

Vertex requires a bit of extra set-up. First, install the extra dependency for auth:
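
Assuming the auth extra is named vertex on the fhlmi package:

```bash
pip install "fhlmi[vertex]"
```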

and then you need to configure which region/project you're using for the model calls. Make sure you're authed for that region/project. Typically that means running:
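
A typical gcloud setup (the project ID is a placeholder):

```bash
gcloud auth login
gcloud config set project YOUR-PROJECT-ID
gcloud auth application-default login
```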

Then you can use vertex models:
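
A sketch (LiteLLM routes Vertex models via the vertex_ai/ prefix; the specific Gemini model name is illustrative):

```python
from lmi import LiteLLMModel

llm = LiteLLMModel(name="vertex_ai/gemini-1.5-flash")
```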

Embedding models

This client also includes embedding models. An embedding model is a class that inherits from EmbeddingModel and implements the embed_documents method, which receives a list of strings and returns a list of embeddings (one list of floats per input string).

Currently, the following embedding models are supported:

  • LiteLLMEmbeddingModel

  • SparseEmbeddingModel

  • SentenceTransformerEmbeddingModel

  • HybridEmbeddingModel

LiteLLMEmbeddingModel

LiteLLMEmbeddingModel provides a wrapper around LiteLLM's embedding functionality. It supports various embedding models through the LiteLLM interface, with automatic dimension inference and token limit handling. It defaults to text-embedding-3-small and can be configured with name and config parameters. Notice that LiteLLMEmbeddingModel can also be rate limited.
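
A sketch of typical usage (the default model name follows the description above):

```python
from lmi import LiteLLMEmbeddingModel

model = LiteLLMEmbeddingModel()  # defaults to text-embedding-3-small
embeddings = await model.embed_documents(["Ducks fly north in summer."])
```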

HybridEmbeddingModel

HybridEmbeddingModel combines multiple embedding models by concatenating their outputs. It is typically used to combine a dense embedding model (like LiteLLMEmbeddingModel) with a sparse embedding model for improved performance. The model can be created in two ways:
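
A sketch of both construction paths (the hybrid- name prefix accepted by EmbeddingModel.from_name is an assumption):

```python
from lmi import (
    EmbeddingModel,
    HybridEmbeddingModel,
    LiteLLMEmbeddingModel,
    SparseEmbeddingModel,
)

# 1. Compose the component models directly:
model = HybridEmbeddingModel(models=[LiteLLMEmbeddingModel(), SparseEmbeddingModel()])

# 2. Or build from a name with a "hybrid-" prefix (assumption):
model = EmbeddingModel.from_name("hybrid-text-embedding-3-small")
```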

The resulting embedding dimension will be the sum of the dimensions of all component models. For example, if you combine a 1536-dimensional dense embedding with a 256-dimensional sparse embedding, the final embedding will be 1792-dimensional.

SentenceTransformerEmbeddingModel

You can also use sentence-transformers, a local embedding library with support for Hugging Face models, by installing lmi[local].
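
A sketch (the name parameter and the checkpoint are illustrative assumptions):

```python
from lmi import SentenceTransformerEmbeddingModel

model = SentenceTransformerEmbeddingModel(name="all-MiniLM-L6-v2")
embeddings = await model.embed_documents(["Ducks fly north in summer."])
```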
