This installation guide is for Full R2R. For solo developers or teams prototyping, we recommend starting with R2R Light.

This guide will walk you through installing and running R2R using Docker, which is the quickest and easiest way to get started.

Prerequisites

Install the R2R CLI & Python SDK

First, install the R2R CLI and Python SDK:

$ pip install r2r

We are actively developing a distinct CLI binary for R2R to make installation easier. Please reach out if you have any specific needs or feature requests.
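
Once the install finishes, a quick sanity check confirms the SDK is importable from Python. This is a generic check, not an R2R command; the status messages are just illustrative:

```shell
# Check that the Python SDK can be imported; prints a status line either way
python -c "import r2r" 2>/dev/null && echo "r2r SDK importable" || echo "r2r SDK not found"
```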

Start R2R with Docker

The full R2R installation does not use the default r2r.toml; instead, it applies overrides through a pre-built custom configuration, full.toml.
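
To give a sense of what such overrides look like, here is a hypothetical sketch of the kind of settings a full configuration might change. The keys below are illustrative only and are not the actual contents of full.toml:

```toml
# Illustrative only -- not the real full.toml.
# A full deployment typically overrides defaults such as the
# orchestration backend and the database settings.
[orchestration]
provider = "hatchet"   # hypothetical key: route tasks through Hatchet

[database]
provider = "postgres"  # hypothetical key: use Postgres + pgvector
```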

To start R2R with OpenAI as the default LLM inference and embedding provider:

$ # Set cloud LLM settings
>export OPENAI_API_KEY=sk-...
>
>r2r serve --docker --full
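
Before launching, it can save a failed start to confirm the key is actually exported in the shell that will run the command. The check below is a generic POSIX idiom, not an R2R feature:

```shell
# Report whether OPENAI_API_KEY is visible to this shell session
if [ -n "${OPENAI_API_KEY:-}" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing"
fi
```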

Refer here for more information on how to configure various LLM providers.

To start R2R with your local computer as the default LLM inference provider:

$ r2r serve --docker --full --config-name=full_local_llm

Then, in a separate terminal, run Ollama to provide completions and embeddings:

$ ollama pull llama3.1
> ollama pull mxbai-embed-large
> ollama serve

The code above assumes that Ollama has already been installed. If you have not yet done so, then refer to the official Ollama webpage for installation instructions. For more information on local installation, refer here.

R2R offers flexibility in selecting and configuring LLMs, allowing you to optimize your RAG pipeline for various use cases. Execute the command below to deploy R2R with your own custom configuration:

$ r2r serve --config-path=/abs/path/to/my_r2r.toml
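
As a starting point, a custom configuration might override only the completion model. The file below is a hypothetical sketch; the key names are illustrative and have not been verified against the R2R schema, so start from the shipped r2r.toml and edit it rather than writing one from scratch:

```toml
# Hypothetical my_r2r.toml -- key names are illustrative
[completion]
  [completion.generation_config]
  model = "openai/gpt-4o-mini"   # illustrative model choice
```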

Learn how to configure your deployment in detail here.

The above command will automatically pull the necessary Docker images and start all the required containers, including R2R, Hatchet, and Postgres+pgvector. The required additional services come bundled into the full R2R Docker Compose by default.

The end result is a live server at http://localhost:7272 serving the R2R API.
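
You can confirm the API is up with a plain HTTP request; the health path below matches the endpoint referenced under Next Steps:

```shell
# Query the health endpoint; falls back to a message if the server
# is not reachable yet (e.g. containers are still starting)
curl -sf http://localhost:7272/v3/health || echo "server not reachable yet"
```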

In addition to launching a RESTful API, the R2R Docker setup also launches applications at localhost:7273 and localhost:7274, which you can read more about here.

Stopping R2R

Safely stop your system by running r2r docker-down to avoid potential shutdown complications.

Next Steps

After successfully installing R2R:

  1. Verify Installation: Ensure all components are running correctly by accessing the R2R API at http://localhost:7272/v3/health.

  2. Quick Start: Follow our R2R Quickstart Guide to set up your first RAG application.

  3. In-Depth Tutorial: For a more comprehensive understanding, work through our R2R Walkthrough.

  4. Customize Your Setup: Configure your R2R system.

If you encounter any issues during installation or setup, please use our Discord community or GitHub repository to seek assistance.
