Communicate with any LLM provider using a single, unified interface. Switch between OpenAI, Anthropic, Mistral, Ollama, and more without changing your code.
```bash
pip install 'any-llm-sdk[mistral,ollama]'

export MISTRAL_API_KEY="YOUR_KEY_HERE" # or OPENAI_API_KEY, etc
```
```python
from any_llm import completion
import os

# Make sure you have the appropriate environment variable set
assert os.environ.get('MISTRAL_API_KEY')

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

That's it! To switch LLM providers, change the provider name and set that provider's API key.
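For example, switching the same request to Anthropic is a one-line change (assuming the anthropic extra is installed and ANTHROPIC_API_KEY is set; check Anthropic's docs for current model ids):

```python
# Same call, different provider -- requires: pip install 'any-llm-sdk[anthropic]'
response = completion(
    model="claude-3-5-sonnet-latest",
    provider="anthropic",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```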
- Python 3.11 or newer
- API keys for whichever LLM providers you want to use
Install support for specific providers:
```bash
pip install 'any-llm-sdk[openai]'          # Just OpenAI
pip install 'any-llm-sdk[mistral,ollama]'  # Multiple providers
pip install 'any-llm-sdk[all]'             # All supported providers
```

See our list of supported providers to choose which ones you need.
Set environment variables for your chosen providers:
```bash
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export MISTRAL_API_KEY="your-key-here"
# ... etc
```

Alternatively, pass API keys directly in your code (see Usage examples).
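A minimal sketch of that alternative: `AnyLLM.create` accepts `api_key` (see the class-based example below), and here we assume the `completion` function accepts the same keyword:

```python
from any_llm import completion

# Assumption: completion() accepts api_key directly, mirroring AnyLLM.create().
# Prefer environment variables or a secrets manager over hardcoded keys.
response = completion(
    model="mistral-small-latest",
    provider="mistral",
    api_key="your-mistral-api-key",
    messages=[{"role": "user", "content": "Hello!"}],
)
```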
- Simple, unified interface - Single function for all providers; switch models with just a string change
- Developer friendly - Full type hints for better IDE support and clear, actionable error messages
- Leverages official provider SDKs - Ensures maximum compatibility
- Framework-agnostic - Usable across different projects and use cases
- Battle-tested - Powers our own production tools (any-agent)
- No proxy or gateway server required - Direct connections to whichever LLM provider you need
any-llm offers two main approaches for interacting with LLM providers:
Recommended approach: Use separate provider and model parameters:
```python
from any_llm import completion
import os

# Make sure you have the appropriate environment variable set
assert os.environ.get('MISTRAL_API_KEY')

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Alternative syntax: Use the combined `provider:model` format:
```python
response = completion(
    model="mistral:mistral-small-latest",  # <provider_id>:<model_id>
    messages=[{"role": "user", "content": "Hello!"}],
)
```

For applications that need to reuse providers, perform multiple operations, or require more control:
```python
from any_llm import AnyLLM

llm = AnyLLM.create("mistral", api_key="your-mistral-api-key")
response = llm.completion(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

| Approach | Best For | Connection Handling |
|---|---|---|
| Direct API functions (`completion`) | Scripts, notebooks, single requests | New client per call (stateless) |
| `AnyLLM` class (`AnyLLM.create`) | Production apps, multiple requests | Reuses client (connection pooling) |
Both approaches support identical features: streaming, tools, the Responses API, and more.
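For instance, here is a sketch of streaming with a reused client. We assume `stream=True` yields OpenAI-style chunks whose deltas carry incremental content; check the streaming docs for the exact chunk shape:

```python
from any_llm import AnyLLM

llm = AnyLLM.create("mistral", api_key="your-mistral-api-key")

# Assumption: stream=True returns an iterator of OpenAI-style chunks.
for chunk in llm.completion(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```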
For providers that implement the OpenAI-style Responses API, use `responses` or `aresponses`:
```python
from any_llm import responses

result = responses(
    model="gpt-4o-mini",
    provider="openai",
    input_data=[
        {"role": "user", "content": [
            {"type": "text", "text": "Summarize this in one sentence."}
        ]}
    ],
)
# Non-streaming returns an OpenAI-compatible Responses object alias
print(result.output_text)
```

The `provider_id` should match our supported provider names.
The `model_id` is passed directly to the provider. To find available models:

- Check the provider's documentation
- Use our `list_models` API, if the provider supports it (sketched below)
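A minimal sketch of the second option. We assume `list_models` is importable from `any_llm`, takes a `provider` argument, and returns model objects with an `id` field; check the API reference for the exact signature:

```python
from any_llm import list_models

# Assumption: list_models(provider=...) returns objects with an `id` attribute.
for model in list_models(provider="mistral"):
    print(model.id)
```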
Try any-llm in action with our interactive demos:
An interactive chat interface showcasing streaming completions and provider switching:
- Real-time streaming responses
- Easy switching between multiple LLM providers
- Collapsible "thinking" content display for supported models
- Auto-scrolling chat interface
Run the Model Finder Demo
A model discovery tool featuring:
- Search and filter models across all your configured providers
- Provider status dashboard
- API configuration checker
The landscape of LLM provider interfaces is fragmented. While OpenAI's API has become the de facto standard, providers implement slight variations in parameter names, response formats, and feature sets. This creates a need for light wrappers that gracefully handle these differences while maintaining a consistent interface.
Existing Solutions and Their Limitations:
- LiteLLM: Popular but reimplements provider interfaces rather than leveraging official SDKs, leading to potential compatibility issues.
- AISuite: Clean, modular approach but lacks active maintenance, comprehensive testing, and modern Python typing standards.
- Framework-specific solutions: Some agent frameworks either depend on LiteLLM or implement their own provider integrations, creating fragmentation.
- Proxy-only solutions: Services like OpenRouter and Portkey require a hosted proxy between your code and the LLM provider.
any-llm addresses these challenges by leveraging official SDKs when available, maintaining framework-agnostic design, and requiring no proxy servers.
- Full Documentation - Complete guides and API reference
- Supported Providers - List of all supported LLM providers
- Cookbook Examples - In-depth usage examples
We welcome contributions from developers of all skill levels! Please see our Contributing Guide or open an issue to discuss changes.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.