This guide shows how to use SciPhi+R2R to:

  1. Ingest files into your SciPhi Cloud
  2. Search over the ingested files
  3. Create a RAG (Retrieval-Augmented Generation) response

Be sure to complete the deployment instructions before continuing with this guide.

Set your environment variable

Start by clicking the copy icon next to your deployed pipeline URL and exporting it into a local environment variable:

export SCIPHI_CLOUD_URL=<your-deployed-pipeline-url>

Ingesting file(s)

The following command ingests a default sample file r2r/examples/data/aristotle.txt:

r2r --base-url=$SCIPHI_CLOUD_URL ingest
{'results': {'processed_documents': ["File '.../aristotle.txt' processed successfully."], 'skipped_documents': []}}
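The response separates processed from skipped documents. A quick way to sanity-check it in Python, using the sample response shown above:

```python
# Sample ingestion response, as returned by the ingest command above
response = {
    "results": {
        "processed_documents": ["File '.../aristotle.txt' processed successfully."],
        "skipped_documents": [],
    }
}

processed = response["results"]["processed_documents"]
skipped = response["results"]["skipped_documents"]
print(f"Processed {len(processed)} file(s), skipped {len(skipped)}.")
```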

To run on your own selected files, execute the following:

r2r --base-url=$SCIPHI_CLOUD_URL ingest /path/to/your_file_1 /path/to/your_file_2 ...

Search

The following command returns the search results for the input query 'who was aristotle?':

# execute with hybrid search
r2r --base-url=$SCIPHI_CLOUD_URL search --query="who was aristotle?" --do-hybrid-search

RAG Response

The following command generates a RAG response to the input query 'who was aristotle?':

r2r --base-url=$SCIPHI_CLOUD_URL rag --query="who was aristotle?" --do-hybrid-search

Stream a RAG Response

The following command streams a RAG response to the input query 'who was aristotle?':

r2r --base-url=$SCIPHI_CLOUD_URL rag --query="who was aristotle?" --stream --do-hybrid-search
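With --stream set, the completion arrives incrementally rather than as one response. A streamed response can be consumed as an iterator of text chunks; the sketch below simulates the stream with a hard-coded list, since the real chunks come from your deployment:

```python
def consume_stream(chunks):
    """Print streamed RAG chunks as they arrive and return the full text."""
    pieces = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        pieces.append(chunk)
    return "".join(pieces)

# Simulated stream standing in for a real streamed RAG response
simulated = iter(["Aristotle ", "was a ", "Greek philosopher."])
result = consume_stream(simulated)
```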

Hello SciPhi

Ready to start communicating with your R2R deployment from code? See the snippet below:

import os
from r2r import Document, GenerationConfig, R2RClient

# Initialize the R2R client with the SciPhi Cloud URL
client = R2RClient(base_url=os.environ.get("SCIPHI_CLOUD_URL"))

# Ingest a document
document = Document(
    data="John is a person that works at Google.",
)
ingest_response = client.ingest_documents([document.dict()])
print(f"Ingestion Response: {ingest_response}")

# Perform a RAG query
rag_results = client.rag(
    query="Who is John?",
    rag_generation_config=GenerationConfig(model="gpt-3.5-turbo", temperature=0.0),
)

print(f"Search Results:\n{rag_results['results']['search_results']}")
print(f"Completion:\n{rag_results['results']['completion']}")

# Example output:
# Ingestion Response: {'results': {'processed_documents': ['Document processed successfully.'], 'skipped_documents': []}}
# Search Results:
# {'vector_search_results': [{'id': '...', 'score': 0.9876, 'metadata': {'text': 'John is a person that works at Google.', ...}}], 'kg_search_results': None}
# Completion:
# {'id': '...', 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': 'John is a person who works at Google.', 'role': 'assistant'}}], ...}
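The RAG response bundles the retrieval hits alongside the generated completion. A small helper to pull out the highest-scoring vector hit, sketched against the response shape in the example output above (the sample dict below is illustrative, not a live response):

```python
def top_vector_hit(rag_results):
    """Return (score, text) of the highest-scoring vector search result."""
    hits = rag_results["results"]["search_results"]["vector_search_results"]
    best = max(hits, key=lambda h: h["score"])
    return best["score"], best["metadata"]["text"]

# Illustrative response mirroring the example output above
sample = {
    "results": {
        "search_results": {
            "vector_search_results": [
                {"id": "1", "score": 0.9876,
                 "metadata": {"text": "John is a person that works at Google."}}
            ],
            "kg_search_results": None,
        }
    }
}

score, text = top_vector_hit(sample)
print(f"{score:.4f}: {text}")
```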