Documents

Ingest and manage your documents

R2R provides a powerful and flexible ingestion pipeline to process and manage various types of documents. It supports a wide range of file formats, including plain text, office documents, PDFs, images, audio, and even video, and transforms them into searchable, analyzable content.

The ingestion process includes parsing, chunking, embedding, and optionally extracting entities and relationships for knowledge graph construction.

This documentation will guide you through:

  • Ingesting files, raw text, or pre-processed chunks
  • Choosing an ingestion mode (fast, hi-res, or custom)
  • Updating and deleting documents and chunks

Refer to the documents API and SDK reference for detailed examples of interacting with documents.

Ingesting Documents

A Document represents ingested content in R2R. When you ingest a file, text, or chunks:

  1. The file (or text) is parsed into text.
  2. Text is chunked into manageable units.
  3. Embeddings are generated for semantic search.
  4. Content is stored for retrieval and optionally linked to the knowledge graph.

Ingestion inside R2R is asynchronous. You can monitor ingestion status and confirm when documents are ready:

client.documents.list()
[
    DocumentResponse(
        id=UUID('e43864f5-a36f-548e-aacd-6f8d48b30c7f'),
        collection_ids=[UUID('122fdf6a-e116-546b-a8f6-e4cb2e2c0a09')],
        owner_id=UUID('2acb499e-8428-543b-bd85-0d9098718220'),
        document_type=<DocumentType.PDF: 'pdf'>,
        metadata={'title': 'DeepSeek_R1.pdf', 'version': 'v0'},
        version='v0',
        size_in_bytes=1768572,
        ingestion_status=<IngestionStatus.SUCCESS: 'success'>,
        extraction_status=<GraphExtractionStatus.PENDING: 'pending'>,
        created_at=datetime.datetime(2025, 2, 8, 3, 31, 39, 126759, tzinfo=TzInfo(UTC)),
        updated_at=datetime.datetime(2025, 2, 8, 3, 31, 39, 160114, tzinfo=TzInfo(UTC)),
        ingestion_attempt_number=None,
        summary="The document contains a comprehensive overview of DeepSeek-R1, a series of reasoning models developed by DeepSeek-AI, which includes DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero utilizes large-scale reinforcement learning (RL) without supervised fine-tuning, showcasing impressive reasoning capabilities but facing challenges like readability and language mixing. To enhance performance, DeepSeek-R1 incorporates multi-stage training and cold-start data, achieving results comparable to OpenAI's models on various reasoning tasks. The document details the models' training processes, evaluation results across multiple benchmarks, and the introduction of distilled models that maintain reasoning capabilities while being smaller and more efficient. It also discusses the limitations of current models, such as language mixing and sensitivity to prompts, and outlines future research directions to improve general capabilities and efficiency in software engineering tasks. The findings emphasize the potential of RL in developing reasoning abilities in large language models and the effectiveness of distillation techniques for smaller models.",
        summary_embedding=None,
        total_tokens=29673
    ),
    ...
]

An ingestion_status of "success" confirms the document is fully ingested. You can also check your R2R dashboard for ingestion progress and status.
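If you prefer to wait programmatically rather than watching the dashboard, a small polling loop works. The sketch below is a minimal example, assuming a documents.retrieve(id=...) call that returns the same fields shown in the listing above; the exact retrieval method and response shape may differ in your SDK version.

import time

def wait_for_ingestion(client, document_id, interval_s=2.0, timeout_s=300.0):
    """Poll until a document's ingestion_status reaches a terminal state.

    Assumes client.documents.retrieve(id=...) returns an object exposing an
    ingestion_status field like the DocumentResponse shown above.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        doc = client.documents.retrieve(id=document_id)
        status = str(getattr(doc, "ingestion_status", "")).lower()
        if "success" in status:
            return doc
        if "failed" in status:
            raise RuntimeError(f"Ingestion failed for document {document_id}")
        time.sleep(interval_s)
    raise TimeoutError(f"Document {document_id} did not finish ingesting in time")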

Supported File Types

R2R supports ingestion of the following document types:

Category              File types
Image                 .bmp, .heic, .jpeg, .png, .tiff
MP3                   .mp3
PDF                   .pdf
CSV                   .csv
E-mail                .eml, .msg, .p7s
EPUB                  .epub
Excel                 .xls, .xlsx
HTML                  .html
Markdown              .md
Org Mode              .org
Open Office           .odt
Plain text            .txt
PowerPoint            .ppt, .pptx
reStructured Text     .rst
Rich Text             .rtf
TSV                   .tsv
Word                  .doc, .docx
XML                   .xml

For more details on creating documents, refer to the create document API.
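For example, a PDF can be ingested with custom metadata attached. This is a minimal sketch: the metadata keys below are arbitrary examples, and passing a metadata dictionary at creation time is an assumption based on the metadata field shown in the DocumentResponse above; confirm the parameter in the create document API reference.

# Ingest a PDF and attach metadata (hypothetical keys; adjust to your schema).
ingest_response = client.documents.create(
    file_path="path/to/DeepSeek_R1.pdf",
    metadata={"title": "DeepSeek_R1.pdf", "source": "research-papers"},
)
print(ingest_response)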

Ingestion Modes

R2R offers three ingestion modes so you can balance speed against depth of processing:

  • fast: a speed-oriented mode that prioritizes rapid processing with minimal enrichment. Summaries and some advanced parsing are skipped, making it ideal for quickly processing large volumes of documents.
  • hi-res: a thorough mode for high-quality, multimodal analysis.
  • custom: advanced, granular control over the ingestion pipeline.

Unprocessed Files

To ingest an unprocessed file, pass its path along with the desired ingestion mode. The example below uses fast mode:

file_path = 'path/to/file.txt'

# export R2R_API_KEY='sk-....'

ingest_response = client.documents.create(
    file_path=file_path,
    ingestion_mode="fast"  # fast mode for quick processing
)
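
The other two modes follow the same call shape. The sketch below assumes that ingestion_mode accepts "hi-res" and "custom", and that custom mode reads its settings from an ingestion_config dictionary; the configuration keys shown are hypothetical, so check the create document API reference for the options your deployment supports.

# Quality-oriented ingestion (slower, richer parsing).
hi_res_response = client.documents.create(
    file_path="path/to/report.pdf",
    ingestion_mode="hi-res",
)

# Granular control: "custom" mode with an explicit configuration.
# The ingestion_config parameter and its keys are assumptions here;
# consult the API reference for the supported settings.
custom_response = client.documents.create(
    file_path="path/to/report.pdf",
    ingestion_mode="custom",
    ingestion_config={
        "chunk_size": 1024,     # hypothetical setting
        "chunk_overlap": 128,   # hypothetical setting
    },
)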

Raw Text and Pre-Processed Chunks

You can also ingest raw text directly, or supply pre-processed chunks from your own pipeline. Ingesting chunks is especially useful if you've already divided content into logical segments.

Raw text

raw_text = "This is my first document."
client.documents.create(
    raw_text=raw_text,
)

Pre-Processed Chunks

chunks = ["This is my first parsed chunk", "This is my second parsed chunk"]
client.documents.create(
    chunks=chunks,
)

Deleting Documents and Chunks

To remove documents or chunks, call their respective delete methods:

# Delete a document
client.documents.delete(document_id)

# Delete a chunk
client.chunks.delete(chunk_id)

You can also delete documents by specifying filters using the by-filter route.
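
A filtered delete might look like the sketch below. The delete_by_filter method name and the filter syntax are assumptions based on the by-filter route, so verify both against the documents API reference.

# Delete every document whose metadata marks it as a draft.
# Method name and filter operators are assumptions; see the API reference.
client.documents.delete_by_filter(
    filters={"metadata.status": {"$eq": "draft"}},
)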

Conclusion

R2R’s ingestion pipeline is flexible and efficient, allowing you to tailor processing to your needs:

  • Use fast for quick processing.
  • Use hi-res for high-quality, multimodal analysis.
  • Use custom for advanced, granular control.

You can easily ingest documents or pre-processed chunks, update their content, and delete them when no longer needed. Combined with powerful retrieval and knowledge graph capabilities, R2R enables seamless integration of advanced document management into your applications.
