GraphRAG

GraphRAG with Memgraph

LLMs have the knowledge they were trained on. By building a RAG system, you expand that knowledge with your own data. It is important to understand how to structure and model the data, and how to find and extract relevant information, so the LLM can provide more accurate responses personalized to your specific data.

*(image: GraphRAG with Memgraph)*

GraphRAG is a RAG system that combines the strengths of knowledge graphs and LLMs. Knowledge graphs are a structured representation of information where entities and their relationships are organized to enable reasoning and insights.

Here are the main strengths of a graph in a RAG system:

  • Relational context - The knowledge graph structure captures the semantics of how entities relate.
  • Improved retrieval accuracy - Graph-specific retrieval strategies, such as community detection and impact analysis.
  • Multi-hop reasoning - The ability to traverse through data neighborhoods.
  • Efficient information navigation - Scanning subgraphs instead of full datasets.
  • Dynamically evolving knowledge - Updating the graph in real time.
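As a toy illustration of multi-hop reasoning and subgraph scanning, the snippet below walks a small adjacency-list graph breadth-first and collects everything reachable within a few hops. This is pure Python for illustration only, not the Memgraph API; a graph database performs this kind of neighborhood traversal natively.

```python
from collections import deque

def within_hops(graph, start, max_hops):
    """Return all nodes reachable from `start` in at most `max_hops` edges."""
    seen = {start}
    queue = deque([(start, 0)])
    reachable = []
    while queue:
        node, dist = queue.popleft()
        if dist == max_hops:
            continue  # don't expand beyond the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                reachable.append(neighbor)
                queue.append((neighbor, dist + 1))
    return reachable

# Tiny knowledge graph: person -> collaborator / theory -> field
graph = {
    "Darwin": ["Wallace", "natural selection"],
    "natural selection": ["evolutionary biology"],
    "Wallace": [],
}
print(within_hops(graph, "Darwin", 2))
# ['Wallace', 'natural selection', 'evolutionary biology']
```

Only the start node's neighborhood is scanned, never the full dataset, which is what makes graph retrieval efficient at scale.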

A graph structure is a prerequisite for GraphRAG, and a graph database is even better. A GraphRAG application running in production needs scalability, real-time performance, incremental updates and persistence. Having a graph database as part of the GraphRAG system is especially useful if other parts of the application also rely on it.

Key Memgraph features

Memgraph is a graph database that stores your knowledge graph and ensures durability of the stored data for backup and recovery. Refer to our graph modeling guide for tips and tricks on building a knowledge graph.

With Memgraph as an in-memory graph database, you can quickly run deep path traversals through your graph without worrying about latency.

You can ingest streaming data into Memgraph with Kafka, Redpanda or Pulsar, and then query it with (dynamic) MAGE algorithms or your own custom procedures. That gives you a growing knowledge graph that is updated on the fly.

Here is how the most useful Memgraph features for building a GraphRAG fit into the architecture:

*(image: GraphRAG architecture)*

Tools

GraphChat is a Memgraph Lab feature that allows users to extract insights from a graph database by asking questions in plain English. It incorporates elements of GraphRAG. This two-phase generative AI feature first generates Cypher queries from the question text and then summarizes the query results in the final response.
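To make the two-phase flow concrete, here is a minimal sketch of the pattern in plain Python, with stub functions standing in for the LLM calls. This is an illustration of the architecture, not GraphChat's actual implementation.

```python
def question_to_cypher(question: str) -> str:
    """Phase 1: an LLM would translate the question into a Cypher query.
    Stubbed here with a fixed template for illustration."""
    return (
        "MATCH (p:Person {name: 'Charles Darwin'})-[r]->(n) "
        "RETURN type(r) AS rel, n.name AS name"
    )

def summarize(question: str, rows: list) -> str:
    """Phase 2: an LLM would summarize the query results into prose.
    Stubbed as simple string formatting."""
    facts = ", ".join(f"{row['rel']} {row['name']}" for row in rows)
    return f"{question} -> {facts}"

# Simulated result of running the generated Cypher against the database
rows = [{"rel": "COLLABORATED_WITH", "name": "Alfred Russel Wallace"}]
answer = summarize("Who did Darwin collaborate with?", rows)
print(answer)
```

The key design point is the separation: the first phase only produces a query, so the LLM never needs the whole dataset in its context window; the second phase only sees the (small) query result.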

*(image: GraphChat in Memgraph Lab)*

Integrations

Memgraph offers several integrations with popular AI frameworks to help you customize and build your own GenAI application from scratch. Below are some of the libraries integrated with Memgraph.

LangChain

LangChain is a framework for developing applications powered by large language models (LLMs). Currently, with Memgraph's LangChain integration you can query your graph database with natural language. The example can be found on the LangChain documentation (opens in a new tab).

We are in the process of updating and improving the integration. We added support for building a knowledge graph from unstructured data and improved schema generation speed. To track progress and speed things up, please upvote the PR on LangChain GitHub (opens in a new tab) 👍.

LlamaIndex

LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. Currently, Memgraph's integration (opens in a new tab) supports creating a knowledge graph from unstructured data and querying with natural language. You can follow the example on LlamaIndex docs (opens in a new tab) or go through quick start below.

Installation

To install LlamaIndex and Memgraph graph store, run:

pip install llama-index llama-index-graph-stores-memgraph

Environment setup

Before you get started, make sure you have Memgraph running in the background.
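If you don't have Memgraph running yet, one common way to start it is with Docker (image name and ports per Memgraph's installation docs; adjust the tag and flags to your setup):

```shell
# 7687 is the Bolt protocol port; 7444 streams logs to Memgraph Lab
docker run -p 7687:7687 -p 7444:7444 --name memgraph memgraph/memgraph-mage
```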

To use Memgraph as the underlying graph store for LlamaIndex, define your graph store by providing the credentials used for your database:

from llama_index.graph_stores.memgraph import MemgraphPropertyGraphStore
 
username = ""  # Enter your Memgraph username (default "")
password = ""  # Enter your Memgraph password (default "")
url = ""  # Specify the connection URL, e.g., 'bolt://localhost:7687'
 
graph_store = MemgraphPropertyGraphStore(
    username=username,
    password=password,
    url=url,
)

Additionally, a working OpenAI key is required:

import os
os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"  # Replace with your OpenAI API key

Dataset

For the dataset, we'll use a text about Charles Darwin stored in the /data/charles_darwin/charles.txt file:

Charles Robert Darwin was an English naturalist, geologist, and biologist,
widely known for his contributions to evolutionary biology. His proposition that
all species of life have descended from a common ancestor is now generally
accepted and considered a fundamental scientific concept. In a joint publication
with Alfred Russel Wallace, he introduced his scientific theory that this
branching pattern of evolution resulted from a process he called natural
selection, in which the struggle for existence has a similar effect to the
artificial selection involved in selective breeding. Darwin has been described
as one of the most influential figures in human history and was honoured by
burial in Westminster Abbey.
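If you're recreating the example from scratch, you can write that text to the expected path first. A small setup snippet; the `darwin_text` variable is shortened here and stands in for the full excerpt above:

```python
import os

darwin_text = (
    "Charles Robert Darwin was an English naturalist, geologist, and "
    "biologist, widely known for his contributions to evolutionary biology."
)  # ...shortened; use the full excerpt above

os.makedirs("./data/charles_darwin", exist_ok=True)
with open("./data/charles_darwin/charles.txt", "w") as f:
    f.write(darwin_text)
```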

from llama_index.core import SimpleDirectoryReader
 
documents = SimpleDirectoryReader("./data/charles_darwin/").load_data()

The data is now loaded into the documents variable, which we'll pass as an argument in the next step of index creation and graph construction.

Graph construction

LlamaIndex provides multiple graph constructors (opens in a new tab). In this example, we'll use the SchemaLLMPathExtractor (opens in a new tab), which lets you either predefine the schema or use the one the LLM infers without explicitly defining entities.

from llama_index.core import PropertyGraphIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor
 
index = PropertyGraphIndex.from_documents(
    documents,
    embed_model=OpenAIEmbedding(model_name="text-embedding-ada-002"),
    kg_extractors=[
        SchemaLLMPathExtractor(
            llm=OpenAI(model="gpt-4", temperature=0.0),
        )
    ],
    property_graph_store=graph_store,
    show_progress=True,
)

In the image below, you can see how the text was transformed into a knowledge graph and stored in Memgraph.

*(image: knowledge graph created with LlamaIndex in Memgraph)*
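If you'd rather predefine the schema than let the LLM infer it, you can describe the allowed entity and relation types plus a validation map. A sketch with illustrative names chosen for the Darwin text (the labels are assumptions, not from the docs):

```python
from typing import Literal

# Illustrative entity labels and relationship types for the Darwin text.
entities = Literal["PERSON", "CONCEPT", "PLACE"]
relations = Literal["PROPOSED", "COLLABORATED_WITH", "BURIED_IN"]

# Which relations are valid from which entity type.
validation_schema = {
    "PERSON": ["PROPOSED", "COLLABORATED_WITH", "BURIED_IN"],
    "CONCEPT": [],
    "PLACE": [],
}
```

These would then be passed to the extractor, e.g. `SchemaLLMPathExtractor(llm=..., possible_entities=entities, possible_relations=relations, kg_validation_schema=validation_schema, strict=True)` (parameter names as in the LlamaIndex docs at the time of writing); with `strict=True`, extracted triplets outside the schema are dropped.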

Querying

Labeled property graphs can be queried in several ways to retrieve nodes and paths, and in LlamaIndex, several node retrieval methods can be combined at once.

If no sub-retrievers are provided, the default is the LLMSynonymRetriever (opens in a new tab).
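Conceptually, a synonym retriever expands the question into related keywords and matches them against node names in the graph. A pure-Python toy of that idea (not the LlamaIndex implementation, which uses an LLM for the expansion step):

```python
def synonym_retrieve(question_keywords, synonyms, node_names):
    """Expand keywords with known synonyms, then match against node names."""
    expanded = set(question_keywords)
    for kw in question_keywords:
        expanded.update(synonyms.get(kw, []))
    return sorted(n for n in node_names if n.lower() in expanded)

# Hypothetical synonym table an LLM might generate for the query term
synonyms = {"darwin": ["charles darwin", "charles robert darwin"]}
nodes = ["Charles Robert Darwin", "Natural selection", "Westminster Abbey"]
matches = synonym_retrieve(["darwin"], synonyms, nodes)
# ['Charles Robert Darwin']
```

The matched nodes then serve as entry points for graph traversal, whose neighborhood is handed to the LLM as context.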

query_engine = index.as_query_engine(include_text=True)
 
response = query_engine.query("Who did Charles Robert Darwin collaborate with?")
print(str(response))

In the image below, you can see what's happening under the hood to get the answer.

*(image: retrieval flow in LlamaIndex)*

Resources

Want to learn more?

To learn more, check out Enhancing AI with graph databases and LLMs bootcamp (opens in a new tab) and on-demand resources (opens in a new tab). Stay up to date with Memgraph events (opens in a new tab) and watch videos from the AI, LLMs and GraphRAG YouTube playlist (opens in a new tab).

If you have questions regarding Memgraph or want to provide feedback, join our Discord (opens in a new tab) community.

If you prefer a call, schedule a 30 min session with one of our engineers to discuss how Memgraph fits with your architecture. Our engineers are highly experienced in helping companies of all sizes integrate Memgraph and get the most out of it in their projects. Talk to us about data modeling, optimizing queries, defining infrastructure requirements or migrating from your existing graph database. No nonsense or sales pitch, just tech.