Quickstart
Getting started with R2R
This guide shows how to use R2R to:
- Ingest files into your Postgres vector database
- Search over ingested files
- Create a RAG (Retrieval-Augmented Generation) response
Be sure to complete the installation instructions before continuing with this guide.
Starting the server
R2R methods can also be called directly, as in r2r.method(...), instead of through a client-server architecture. See the full walkthrough for details on this and more.
You can start the R2R server using the R2R CLI, Python, or Docker:
r2r serve --port=8000
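Before moving on, it can help to confirm that the server is actually listening. The helper below is a generic sketch (not part of R2R) that simply checks whether a TCP connection can be opened on the default port 8000:

```python
import socket

def server_is_listening(host="localhost", port=8000, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After running `r2r serve --port=8000`, this should print True:
print(server_is_listening())
```

If this prints False, double-check that the serve command is still running and that nothing else is bound to the port.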
Ingesting file(s)
The following command ingests a default sample file, r2r/examples/data/aristotle.txt:
r2r ingest
{'results': {'processed_documents': ["File '.../aristotle.txt' processed successfully."], 'skipped_documents': []}}
To ingest files of your own, run:
r2r ingest /path/to/your_file_1 /path/to/your_file_2 ...
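When ingesting many files at once, it can be convenient to collect the paths programmatically rather than typing them out. The snippet below is a hypothetical helper (not part of R2R) that globs a directory and builds the corresponding `r2r ingest` command line:

```python
from pathlib import Path
import shlex

def build_ingest_command(directory, pattern="*.txt"):
    """Collect files matching `pattern` and build an `r2r ingest` command."""
    files = sorted(Path(directory).glob(pattern))
    # shlex.quote protects paths containing spaces or shell metacharacters
    return "r2r ingest " + " ".join(shlex.quote(str(f)) for f in files)

# e.g. build_ingest_command("docs/") -> "r2r ingest docs/a.txt docs/b.txt"
```

You could then run the resulting string in your shell, or pass the file list to `subprocess.run` directly.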
Executing a search
The following command returns the search results for the input query 'who was aristotle?':
# execute with hybrid search
r2r search --query="who was aristotle?" --do-hybrid-search
{'results': {'vector_search_results': [
{
'id': '7ed3a01c-88dc-5a58-a68b-6e5d9f292df2',
'score': 0.780314067545999,
'metadata': {
'text': 'Aristotle[A] (Greek: Ἀριστοτέλης Aristotélēs, pronounced [aristotélɛːs]; 384–322 BC) was an Ancient Greek philosopher and polymath. His writings cover a broad range of subjects spanning the natural sciences, philosophy, linguistics, economics, politics, psychology, and the arts. As the founder of the Peripatetic school of philosophy in the Lyceum in Athens, he began the wider Aristotelian tradition that followed, which set the groundwork for the development of modern science.',
'title': 'aristotle.txt',
'version': 'v0',
'chunk_order': 0,
'document_id': 'c9bdbac7-0ea3-5c9e-b590-018bd09b127b',
'extraction_id': '472d6921-b4cd-5514-bf62-90b05c9102cb',
...
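If you consume these results in Python rather than on the command line, a small helper can pull out the fields you usually care about. This is a sketch assuming the response dictionary has the shape shown above ('results' -> 'vector_search_results' -> 'score'/'metadata'):

```python
def top_hits(response, k=3):
    """Extract (score, text) pairs from an R2R search response dict."""
    hits = response["results"]["vector_search_results"]
    return [(h["score"], h["metadata"]["text"]) for h in hits[:k]]

# Given the response above, top_hits(response, k=1) would yield a single
# (score, text) pair for the aristotle.txt chunk.
```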
RAG Response
The following command generates a RAG response to the input query 'who was aristotle?':
r2r rag --query="who was aristotle?" --do-hybrid-search
Search Results:
{'vector_search_results': [
{'id': '7ed3a01c-88dc-5a58-a68b-6e5d9f292df2',
'score': 0.7802911996841491,
'metadata': {'text': 'Aristotle[A] (Greek: Ἀριστοτέλης Aristotélēs, pronounced [aristotélɛːs]; 384–322 BC) was an Ancient Greek philosopher and polymath. His writings cover a broad range of subjects spanning the natural sciences, philosophy, linguistics, economics, politics, psychology, and the arts. As the founder of the Peripatetic schoo
...
Completion:
{'results': [
{
'id': 'chatcmpl-9eXL6sKWlUkP3f6QBnXvEiKkWKBK4',
'choices': [
{
'finish_reason': 'stop',
'index': 0,
'logprobs': None,
'message': {
'content': "Aristotle (384–322 BC) was an Ancient Greek philosopher and polymath whose writings covered a broad range of subjects including the natural sciences,
...
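The completion payload mirrors an OpenAI-style chat completion. Assuming the dictionary shape printed above ('results' -> 'choices' -> 'message' -> 'content'), the generated answer can be extracted like this:

```python
def completion_text(response):
    """Pull the generated answer out of an R2R RAG completion payload."""
    completion = response["results"][0]
    return completion["choices"][0]["message"]["content"]

# completion_text(response) -> "Aristotle (384–322 BC) was an Ancient Greek..."
```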
Stream a RAG Response
The following command streams a RAG response to the input query 'who was aristotle?':
r2r rag --query="who was aristotle?" --stream --do-hybrid-search
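When consuming a streamed response in code, the usual pattern is to print each chunk as it arrives rather than waiting for the full answer. The helper below is a generic sketch assuming the stream yields text chunks (it is not an R2R API):

```python
def print_stream(chunks):
    """Print streamed text chunks as they arrive and return the full text."""
    pieces = []
    for chunk in chunks:
        # flush so each chunk appears immediately, without buffering
        print(chunk, end="", flush=True)
        pieces.append(chunk)
    print()  # final newline once the stream ends
    return "".join(pieces)
```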
Hello R2R
R2R supports configurable vector search and RAG right out of the box, as the example below shows:
from r2r import Document, GenerationConfig, R2R
app = R2R() # You may pass a custom configuration to `R2R`
app.ingest_documents(
    [
        Document(
            type="txt",
            data="John is a person that works at Google.",
            metadata={},
        )
    ]
)
rag_results = app.rag(
"Who is john", GenerationConfig(model="gpt-3.5-turbo", temperature=0.0)
)
print(f"Search Results:\n{rag_results.search_results}")
print(f"Completion:\n{rag_results.completion}")
# RAG Results:
# Search Results:
# AggregateSearchResult(vector_search_results=[VectorSearchResult(id=2d71e689-0a0e-5491-a50b-4ecb9494c832, score=0.6848798582029441, metadata={'text': 'John is a person that works at Google.', 'version': 'v0', 'chunk_order': 0, 'document_id': 'ed76b6ee-dd80-5172-9263-919d493b439a', 'extraction_id': '1ba494d7-cb2f-5f0e-9f64-76c31da11381', 'associatedQuery': 'Who is john'})], kg_search_results=None)
# Completion:
# ChatCompletion(id='chatcmpl-9g0HnjGjyWDLADe7E2EvLWa35cMkB', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='John is a person that works at Google [1].', role='assistant', function_call=None, tool_calls=None))], created=1719797903, model='gpt-3.5-turbo-0125', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=11, prompt_tokens=145, total_tokens=156))