Ingestion

Learn how to ingest, update, and delete documents with R2R

Introduction

R2R provides a powerful and flexible ingestion pipeline to process and manage various types of documents. It supports a wide range of file formats—text, documents, PDFs, images, audio, and even video—and transforms them into searchable, analyzable content. The ingestion process includes parsing, chunking, embedding, and optionally extracting entities and relationships for knowledge graph construction.

This cookbook will guide you through:

  • Ingesting files, raw text, or pre-processed chunks
  • Choosing an ingestion mode (fast, hi-res, or custom)
  • Updating and deleting documents and chunks

For more on configuring ingestion, see the Ingestion Configuration Overview and Parsing & Chunking.

Ingestion Modes

R2R offers three primary ingestion modes to tailor the process to your requirements:

  • fast:
    A speed-oriented ingestion mode that prioritizes rapid processing with minimal enrichment. Summaries and some advanced parsing are skipped, making this ideal for quickly processing large volumes of documents.

  • hi-res:
    A comprehensive, high-quality ingestion mode that can leverage multimodal foundation models (vision-language models) to parse complex documents and PDFs, including image-based content.

    • On a light deployment, R2R uses its built-in (r2r) parser.
    • On a full deployment, it can use unstructured_local or unstructured_api for more robust parsing and advanced features.

    Choose hi-res mode if you need the highest quality extraction, including image-to-text analysis and richer semantic segmentation.

  • custom:
    For advanced users who require fine-grained control. In custom mode, you provide a full ingestion_config dict or object to specify every detail: parser options, chunking strategy, character limits, and more.

Example Usage:

file_path = 'path/to/file.txt'
metadata = {'key1': 'value1'}

# hi-res mode for thorough extraction
ingest_response = client.documents.create(
    file_path=file_path,
    metadata=metadata,
    ingestion_mode="hi-res"
)

# fast mode for quick processing
ingest_response = client.documents.create(
    file_path=file_path,
    ingestion_mode="fast"
)

# custom mode for full control
ingest_response = client.documents.create(
    file_path=file_path,
    ingestion_mode="custom",
    ingestion_config={
        "provider": "unstructured_local",
        "strategy": "auto",
        "chunking_strategy": "by_title",
        "new_after_n_chars": 256,
        "max_characters": 512,
        "combine_under_n_chars": 64,
        "overlap": 100,
    }
)

Ingesting Documents

A Document represents ingested content in R2R. When you ingest a file, text, or chunks:

  1. The file (or text) is parsed into text.
  2. Text is chunked into manageable units.
  3. Embeddings are generated for semantic search.
  4. Content is stored for retrieval and optionally linked to the knowledge graph.
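
You can also pass raw text directly instead of a file path. A minimal sketch, assuming the Create Document API accepts a raw_text parameter alongside file_path and chunks (check the API reference for the exact signature):

raw_text = "Aristotle was a Greek philosopher who studied under Plato."

# Ingest inline text; raw_text is an assumed parameter name mirroring
# the file_path and chunks parameters shown elsewhere in this cookbook.
ingest_response = client.documents.create(
    raw_text=raw_text,
    metadata={"source": "inline-note"},
    ingestion_mode="fast"
)
print(ingest_response)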

In a full R2R installation, ingestion is asynchronous. You can monitor ingestion status and confirm when documents are ready:

$ r2r documents list

# Example response
{
  'id': '9fbe403b-c11c-5aae-8ade-ef22980c3ad1',
  'title': 'file.txt',
  'user_id': '2acb499e-8428-543b-bd85-0d9098718220',
  'type': 'txt',
  'created_at': '2024-09-05T18:20:47.921933Z',
  'updated_at': '2024-09-05T18:20:47.921938Z',
  'ingestion_status': 'success',
  'restructuring_status': 'pending',
  'version': 'v0',
  'summary': 'The document contains a ....',  # AI-generated summary
  'collection_ids': [],
  'metadata': {'version': 'v0'}
}

An ingestion_status of "success" confirms the document is fully ingested. You can also check the R2R dashboard at http://localhost:7273 for ingestion progress and status.
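
If you are orchestrating ingestion from code, you can poll for a terminal status instead of watching the CLI or dashboard. A minimal sketch, assuming the SDK exposes client.documents.retrieve and that the response carries the ingestion_status field shown above (verify both against the Documents API reference):

import time

def wait_for_ingestion(client, document_id, timeout=300, interval=5):
    """Poll a document until ingestion reaches a terminal state."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        # client.documents.retrieve and the response shape are assumptions
        # based on the listing output above.
        doc = client.documents.retrieve(id=document_id)
        status = doc.results.ingestion_status
        if status in ("success", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"Ingestion of {document_id} did not complete within {timeout}s")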

For more details on creating documents, refer to the Create Document API.

Ingesting Pre-Processed Chunks

If you have pre-processed chunks from your own pipeline, you can directly ingest them. This is especially useful if you’ve already divided content into logical segments.

chunks = ["This is my first parsed chunk", "This is my second parsed chunk"]
ingest_response = client.documents.create(
    chunks=chunks,
    ingestion_mode="fast"  # use fast for a quick chunk ingestion
)
print(ingest_response)
# {'results': [{'message': 'Document created and ingested successfully.', 'document_id': '7a0dad00-b041-544e-8028-bc9631a0a527'}]}

For more on ingesting chunks, see the Create Chunks API.

Deleting Documents and Chunks

To remove documents or chunks, call their respective delete methods:

# Delete a document
delete_response = client.documents.delete(document_id)

# Delete a chunk
delete_response = client.chunks.delete(chunk_id)

You can also delete documents by specifying filters using the by-filter route.
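
A minimal sketch of a filtered delete, assuming a delete_by_filter method and Mongo-style operators such as $eq (both are assumptions; confirm the method name and filter syntax against the by-filter route documentation):

# Hypothetical example: delete every document tagged as a draft in its metadata.
delete_response = client.documents.delete_by_filter(
    filters={"metadata.status": {"$eq": "draft"}}
)
print(delete_response)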

Additional Configuration & Concepts

  • Light vs. Full Deployments:

    • Light (default) uses R2R’s built-in parser and supports synchronous ingestion.
    • Full deployments orchestrate ingestion tasks asynchronously and integrate with more complex providers like unstructured_local.
  • Provider Configuration:
    Settings in r2r.toml or at runtime (ingestion_config) can adjust parsing and chunking strategies:

    • fast and hi-res modes are influenced by strategies like "auto" or "hi_res" in the unstructured provider.
    • custom mode allows you to override chunk size, overlap, excluded parsers, and more at runtime (see the sketch below).
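
As a sketch of such a runtime override using the built-in r2r provider, the snippet below adjusts chunking at ingestion time. The chunk_size, chunk_overlap, and excluded_parsers keys are assumptions mirroring common r2r.toml ingestion settings; consult the Ingestion Configuration Overview for the authoritative key names:

# Assumed configuration keys for the built-in r2r provider; verify them
# against your r2r.toml before relying on this snippet.
ingest_response = client.documents.create(
    file_path="path/to/file.txt",
    ingestion_mode="custom",
    ingestion_config={
        "provider": "r2r",
        "chunk_size": 1024,
        "chunk_overlap": 128,
        "excluded_parsers": ["mp4"],
    },
)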

For detailed configuration options, see the Ingestion Configuration Overview and Parsing & Chunking.

Conclusion

R2R’s ingestion pipeline is flexible and efficient, allowing you to tailor ingestion to your needs:

  • Use fast for quick processing.
  • Use hi-res for high-quality, multimodal analysis.
  • Use custom for advanced, granular control.

You can easily ingest documents or pre-processed chunks, update their content, and delete them when no longer needed. Combined with powerful retrieval and knowledge graph capabilities, R2R enables seamless integration of advanced document management into your applications.