LLMs
Configuring LLMs
R2R uses Large Language Models (LLMs) as the core reasoning engine for RAG operations, providing sophisticated text generation and analysis capabilities.
R2R uses LiteLLM to route LLM requests because of its provider flexibility. Read more about LiteLLM here.
LLM Configuration
The LLM system can be customized through the completion section in your r2r.toml file. Learn more about working with R2R config files.
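As a rough sketch, a completion section might look like the following; the key names and values here are illustrative and should be checked against the default r2r.toml that ships with R2R.

```toml
# Illustrative values; verify key names against the default r2r.toml.
[completion]
provider = "litellm"

  [completion.generation_config]
  model = "openai/gpt-4o-mini"
  temperature = 0.1
```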
For more detailed information on configuring other search and RAG settings at runtime, please refer to the RAG Configuration documentation.
Environment Variables
Provider-dependent environment variables must be set for whichever backends you enable. These may include API keys and endpoints such as the following.
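The exact variable names depend on your provider and deployment; the keys below follow common LiteLLM conventions and are shown for illustration only.

```bash
# Illustrative examples; confirm the exact variable names with your provider.
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export AZURE_API_KEY=...
export AZURE_API_BASE=https://...
```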
R2R
The R2R LLM provider offers a robust gateway for OpenAI, Anthropic, and Azure Foundry models.
LiteLLM
LiteLLM offers a Python SDK to call 100+ LLM APIs in the OpenAI format.
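For example, a minimal call through the LiteLLM SDK might look like the following; the model name and prompt are placeholders.

```python
from litellm import completion

# Provider-prefixed model names (e.g. "openai/...") tell LiteLLM which backend
# to route to; the matching API key is read from the environment.
response = completion(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain retrieval-augmented generation in one sentence."}],
)
print(response.choices[0].message.content)
```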
OpenAI
The OpenAI LLM provider makes direct use of the OpenAI Python SDK.
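As a rough illustration of what that SDK usage looks like on its own (the model name is a placeholder):

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is retrieval-augmented generation?"}],
)
print(response.choices[0].message.content)
```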