
any-llm

Read the Blog Post

Docs


Communicate with any LLM provider using a single, unified interface. Switch between OpenAI, Anthropic, Mistral, Ollama, and more without changing your code.

Documentation | Try the Demos | Contributing

Quickstart

pip install 'any-llm-sdk[mistral,ollama]'

export MISTRAL_API_KEY="YOUR_KEY_HERE"  # or OPENAI_API_KEY, etc.

from any_llm import completion
import os

# Make sure you have the appropriate environment variable set
assert os.environ.get('MISTRAL_API_KEY')

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

That's it! Change the provider name and add provider-specific keys to switch between LLM providers.
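For example, assuming you have installed the openai extra and set OPENAI_API_KEY, the same request can be pointed at OpenAI by changing only the provider and model strings (gpt-4o-mini is used here as an illustrative model name):

response = completion(
    model="gpt-4o-mini",
    provider="openai",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)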

Installation

Requirements

  • Python 3.11 or newer
  • API keys for whichever LLM providers you want to use

Basic Installation

Install support for specific providers:

pip install 'any-llm-sdk[openai]'           # Just OpenAI
pip install 'any-llm-sdk[mistral,ollama]'   # Multiple providers
pip install 'any-llm-sdk[all]'              # All supported providers

See our list of supported providers to choose which ones you need.

Setting Up API Keys

Set environment variables for your chosen providers:

export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export MISTRAL_API_KEY="your-key-here"
# ... etc

Alternatively, pass API keys directly in your code (see Usage examples).
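For example, here is a minimal sketch that passes the key as an argument; it assumes completion accepts an api_key keyword, mirroring the AnyLLM.create example further below:

from any_llm import completion

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    api_key="your-mistral-api-key",  # assumed keyword; placeholder value
    messages=[{"role": "user", "content": "Hello!"}]
)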

Why choose any-llm?

  • Simple, unified interface - Single function for all providers, switch models with just a string change
  • Developer friendly - Full type hints for better IDE support and clear, actionable error messages
  • Leverages official provider SDKs - Ensures maximum compatibility
  • Framework-agnostic - Usable across different projects and use cases
  • Battle-tested - Powers our own production tools (any-agent)
  • No proxy or gateway server required - Direct connections to whichever LLM provider you need

Usage

any-llm offers two main approaches for interacting with LLM providers:

Option 1: Direct API Functions (Recommended for Bootstrapping and Experimentation)

Recommended approach: Use separate provider and model parameters:

from any_llm import completion
import os

# Make sure you have the appropriate environment variable set
assert os.environ.get('MISTRAL_API_KEY')

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Alternative syntax: Use combined provider:model format:

response = completion(
    model="mistral:mistral-small-latest", # <provider_id>:<model_id>
    messages=[{"role": "user", "content": "Hello!"}]
)

Option 2: AnyLLM Class (Recommended for Production)

For applications that need to reuse providers, perform multiple operations, or require more control:

from any_llm import AnyLLM

llm = AnyLLM.create("mistral", api_key="your-mistral-api-key")

response = llm.completion(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Hello!"}]
)
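Because the instance is created once, it can be reused across requests; a short sketch of that pattern, using the llm object from above:

# Reuse the same provider client for several requests
questions = ["What is an LLM?", "Name three LLM providers."]
for question in questions:
    response = llm.completion(
        model="mistral-small-latest",
        messages=[{"role": "user", "content": question}]
    )
    print(response.choices[0].message.content)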

When to Use Which Approach

| Approach | Best For | Connection Handling |
|----------|----------|----------------------|
| Direct API Functions (completion) | Scripts, notebooks, single requests | New client per call (stateless) |
| AnyLLM Class (AnyLLM.create) | Production apps, multiple requests | Reuses client (connection pooling) |

Both approaches support identical features: streaming, tools, the Responses API, and more.
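As an illustration of streaming, here is a minimal sketch; it assumes completion accepts a stream=True flag and yields OpenAI-style chunks with incremental deltas (check the docs for the exact streaming interface):

from any_llm import completion

for chunk in completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,  # assumed flag for streaming responses
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)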

Responses API

For providers that implement the OpenAI-style Responses API, use responses or aresponses:

from any_llm import responses

result = responses(
    model="gpt-4o-mini",
    provider="openai",
    input_data=[
        {"role": "user", "content": [
            {"type": "text", "text": "Summarize this in one sentence."}
        ]}
    ],
)

# Non-streaming returns an OpenAI-compatible Responses object alias
print(result.output_text)
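The async variant follows the same shape; a minimal sketch, assuming aresponses takes the same parameters as responses:

import asyncio

from any_llm import aresponses

async def main():
    result = await aresponses(
        model="gpt-4o-mini",
        provider="openai",
        input_data=[
            {"role": "user", "content": [
                {"type": "text", "text": "Summarize this in one sentence."}
            ]}
        ],
    )
    print(result.output_text)

asyncio.run(main())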

Finding the Right Model

The provider_id should match our supported provider names.

The model_id is passed directly to the provider. To find available models:

  • Check the provider's documentation
  • Use our list_models API (if the provider supports it)
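For example, a minimal sketch of model discovery; it assumes list_models can be imported from any_llm and accepts a provider argument, and that the returned objects expose an id field (the exact signature may differ, see the docs):

from any_llm import list_models

for model in list_models(provider="mistral"):
    print(model.id)  # assumed attribute on the returned model objects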

Try It

Try any-llm in action with our interactive demos:

πŸ’¬ Chat Demo

πŸ“‚ Run the Chat Demo

An interactive chat interface showcasing streaming completions and provider switching:

  • Real-time streaming responses
  • Easy switching between multiple LLM providers
  • Collapsible "thinking" content display for supported models
  • Auto-scrolling chat interface

πŸ” Model Finder Demo

πŸ“‚ Run the Model Finder Demo

A model discovery tool featuring:

  • Search and filter models across all your configured providers
  • Provider status dashboard
  • API configuration checker

Motivation

The landscape of LLM provider interfaces is fragmented. While OpenAI's API has become the de facto standard, providers implement slight variations in parameter names, response formats, and feature sets. This creates a need for light wrappers that gracefully handle these differences while maintaining a consistent interface.

Existing Solutions and Their Limitations:

  • LiteLLM: Popular but reimplements provider interfaces rather than leveraging official SDKs, leading to potential compatibility issues.
  • AISuite: Clean, modular approach but lacks active maintenance, comprehensive testing, and modern Python typing standards.
  • Framework-specific solutions: Some agent frameworks either depend on LiteLLM or implement their own provider integrations, creating fragmentation.
  • Proxy-only solutions: Tools like OpenRouter and Portkey require a hosted proxy between your code and the LLM provider.

any-llm addresses these challenges by leveraging official SDKs when available, maintaining framework-agnostic design, and requiring no proxy servers.

Documentation

Contributing

We welcome contributions from developers of all skill levels! Please see our Contributing Guide or open an issue to discuss changes.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.