LLMs
Learn how to configure and switch between different Language Model providers in R2R
R2R supports multiple Language Model (LLM) providers, allowing users to easily switch between different models based on their requirements. This guide explains how to configure and use various LLM providers within the R2R framework.
Supported Providers
R2R currently supports the following LLM providers:
- LiteLLM (default)
- OpenAI
Configuring LLM Providers
To switch between LLM providers, update the `completions` section in your `config.json` file:
"completions": {
"provider": "litellm"
}
Change the `provider` value to `"openai"` or `"litellm"` to switch providers.
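A quick way to sanity-check this setting programmatically, sketched here as a hypothetical helper (not part of the R2R API), assuming the two provider names above:

```python
import json

# Providers documented on this page; adjust if your R2R version supports more.
SUPPORTED_PROVIDERS = {"litellm", "openai"}

def get_provider(config_text: str) -> str:
    """Return the completions provider named in a config.json string."""
    config = json.loads(config_text)
    provider = config["completions"]["provider"]
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported completions provider: {provider!r}")
    return provider

print(get_provider('{"completions": {"provider": "litellm"}}'))  # litellm
```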
Provider Details
LiteLLM Provider (Default)
The `LiteLLM` class is the default implementation that integrates with various LLM providers through the LiteLLM library.
Key features:
- Supports multiple LLM providers
- Allows switching between LLMs by setting environment variables
- Provides automatic prompt template translation for certain providers
- Supports registering custom prompt templates
To use LiteLLM with a specific provider, set the appropriate environment variables:
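For example, for OpenAI or Anthropic models (the variable names follow LiteLLM's conventions; substitute the key variables for your chosen provider):

```shell
# OpenAI models via LiteLLM
export OPENAI_API_KEY=your_openai_api_key

# Anthropic models via LiteLLM
export ANTHROPIC_API_KEY=your_anthropic_api_key
```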
For detailed setup instructions for each provider, refer to the LiteLLM documentation.
OpenAI Provider
The `OpenAILLM` class integrates directly with the OpenAI API.
Key features:
- Initializes the OpenAI client using the provided API key
- Supports both non-streaming and streaming completions
- Allows passing additional arguments to the OpenAI API
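The non-streaming and streaming modes follow the usual completion pattern; here is a minimal illustration using a stand-in client (not R2R's actual `OpenAILLM` class, which wraps the OpenAI API):

```python
from typing import Iterator

class StubClient:
    """Stand-in client showing the two completion modes R2R supports."""

    def complete(self, prompt: str) -> str:
        # Non-streaming: return the full completion in one response.
        return f"echo: {prompt}"

    def complete_stream(self, prompt: str) -> Iterator[str]:
        # Streaming: yield the completion chunk by chunk as it is produced.
        for token in f"echo: {prompt}".split():
            yield token

client = StubClient()
print(client.complete("hello"))               # echo: hello
print(list(client.complete_stream("hello")))  # ['echo:', 'hello']
```

Streaming is useful for surfacing partial output to users while a long completion is still being generated.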
To use the OpenAI provider:
```shell
export OPENAI_API_KEY=your_openai_api_key
```
Configuring Remote LLM Providers
To configure a specific LLM provider:
- Set the appropriate environment variables as described above.
- Update the `completions` section in your `config.json` file.
Example configuration for OpenAI:
"completions": {
"provider": "openai",
"model": "gpt-4"
}
Make sure to include any additional provider-specific settings in your configuration file.
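One common pattern for handling such settings, sketched here as a hypothetical helper (not R2R's actual loader), is to treat everything in `completions` besides `provider` as keyword arguments to pass through:

```python
import json

def parse_completions_config(config_text: str):
    """Split a completions block into the provider name and its extra settings."""
    completions = json.loads(config_text)["completions"]
    provider = completions.pop("provider")
    # Remaining keys (model, temperature, ...) are provider-specific settings.
    return provider, completions

provider, settings = parse_completions_config(
    '{"completions": {"provider": "openai", "model": "gpt-4", "temperature": 0.2}}'
)
print(provider)  # openai
print(settings)  # {'model': 'gpt-4', 'temperature': 0.2}
```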
Conclusion
By following this guide, you can easily configure and switch between different LLM providers in R2R. This flexibility allows you to choose the best LLM for your specific use case while maintaining a consistent interface within your application.
For more information on customizing R2R, refer to the Customizing R2R documentation.