GraphChat
GraphChat is a natural language querying tool integrated into Memgraph Lab, designed to transform how users interact with graph databases. It is built for non-technical users while still catering to advanced developers.
By using Large Language Models (LLMs), such as OpenAI’s GPT-4, it picks one of the available tools to query your graph, retrieve storage info, and run built-in algorithms like PageRank, delivering precise and actionable results.
How it works
Using GraphChat involves three key steps:
- Establish an LLM connection: Set up a connection by choosing an LLM provider, selecting a model, and configuring authentication and usage settings. Each connection includes details like the endpoint, headers, retry logic, and context preferences.
- Start chatting: Once connected, open the chat interface. You can create multiple threads to organize conversations by topic, question type, or model comparison. Each question–answer pair is stored as an exchange, which can be reused as context in future prompts.
- Let GraphChat handle the rest: GraphChat automatically selects the most appropriate tool, whether that means generating a Cypher query, running an algorithm, or inspecting metadata. You can review, adjust, or expand any answer to inspect the LLM’s reasoning process and control the prompt context.
From Memgraph 2.16, GraphChat does not require MAGE to be installed. For schema information, GraphChat uses the SHOW SCHEMA INFO query, if available. If the SHOW SCHEMA INFO query is not enabled, it falls back to the schema-related procedures. If neither of those is available, it runs plain Cypher queries to infer the schema.
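As a rough sketch of that fallback chain, the queries below show how you could inspect the schema yourself. SHOW SCHEMA INFO only works when Memgraph is started with the --schema-info-enabled flag, and the procedure name assumes Memgraph's schema module; treat both as assumptions if your version differs:

```cypher
// Preferred: available when Memgraph runs with --schema-info-enabled=true
SHOW SCHEMA INFO;

// Fallback: schema-related procedures (assumes the built-in schema module)
CALL schema.node_type_properties();

// Last resort: plain Cypher sampling of labels and relationship types
MATCH (n)-[r]->(m)
RETURN DISTINCT labels(n) AS source, type(r) AS rel, labels(m) AS target
LIMIT 100;
```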
Setting up an LLM connection
Before using GraphChat, you must have data stored in Memgraph and you need to configure at least one LLM connection. Each connection is defined by:
- Provider
- Connection name
- Endpoint
- Optional custom headers for authentication
- A model configuration to control how the model responds
Providers
GraphChat supports connections to the following providers:
- Ollama - Requires a locally deployed Ollama model
- OpenAI - Requires an OpenAI account with GPT-4 access
- Azure OpenAI - Requires an Azure OpenAI Service account
- DeepSeek - Requires a DeepSeek account
Ollama
For local LLM model setup, you can use Ollama:
- Provide the local endpoint URL, such as http://localhost:11434.
If you are having issues connecting to Ollama, try using host.docker.internal instead of localhost or 127.0.0.1. Additional settings may be required if you are using Docker or Docker Compose to run Memgraph and Memgraph Lab.
Learn more about Ollama and how to set it up for local LLM model use in the Ollama documentation. Ensure you follow the appropriate guidelines and documentation when setting up these connections to take full advantage of GraphChat’s capabilities within Memgraph Lab.
OpenAI
Use OpenAI’s models for processing natural language queries. Set up a connection to OpenAI by providing a valid OpenAI key.
Azure OpenAI
Set up a connection to Azure OpenAI by providing:
- azureOpenApiVersion: Your Azure OpenAI service version. Find the list of versions here.
- azureOpenApiApiKey: Your Azure OpenAI API key
- azureOpenApiInstanceName: Your Azure OpenAI instance name
- azureOpenApiDeploymentName: Your Azure OpenAI deployment name
Additional Azure OpenAI integration details can be found in the Azure OpenAI documentation.
Model configuration
You can connect to the same provider multiple times using different models, depending on your specific use case. Once the connection is established, the chat interface will display the current conversation and its associated threads on the left.
You can fine-tune how each model behaves by adjusting the following configuration parameters from the chat interface or in the Lab Settings:
- Retry limit
- Temperature
- Max tokens
- TopP
- Frequency penalty
- Presence penalty
Additional settings allow for more control:
- Max previous exchanges – Limit how many prior messages GraphChat includes in each prompt to provide context-aware responses. Including more history can improve continuity, but may increase response costs.
- Tool usage step limit – Set the maximum number of reasoning steps GraphChat can take when using tools to answer a question. More steps enable deeper problem-solving but may increase latency and usage costs.
- LLM permissions – Limit the changes GraphChat can make to your database.
- Additional context – Include extra contextual information as needed.
You can also create multiple configurations for the same model to suit different use cases.
Chat interface
GraphChat lets you create multiple threads to keep conversations organized—whether you’re exploring different topics or comparing results from various models.
Each question–answer pair is called an exchange. GraphChat can include previous exchanges in the prompt context to make responses more relevant. You have control over how that context is built:
- Default: The last five messages are included automatically.
- Customizable: In the model configuration, you can adjust the number of previous exchanges or manually exclude specific ones.
When asking a question, GraphChat shows how many previous exchanges will be used. To adjust this:
- Exclude specific exchanges using the thumbs down icon.
- Update the max previous exchanges parameter in the model configuration.
Coming soon: You’ll be able to manually select or deselect specific previous exchanges directly from the conversation view for even more customization.
To generate responses, GraphChat leverages prompt context and built-in tools. Exploring exchanges gives you deeper insight into the LLM’s reasoning process.
Prompt context
When you pose a question, GraphChat builds a prompt context for the LLM that includes:
- Introduction
- Schema
- Constraints
- Cypher-specific notes
- Additional (user-provided) context
Built-in tools
GraphChat gives models access to built-in tools that allow them to query your data, inspect the database, and run algorithms:
- run-cypher-query: Generate and execute a Cypher query.
- run-page-rank: Identify the most connected and impactful nodes using PageRank.
- run-betweenness-centrality: Determine which nodes serve as critical bridges in the graph.
- show-config: List all Memgraph database configuration parameters.
- show-schema-info: Display the full database schema (requires the --schema-info-enabled flag on startup).
- show-storage-info: View memory usage, available memory, disk usage, and counts of nodes and relationships.
- show-indexes: List all database indexes.
- show-constraints: List all defined constraints.
- show-triggers: List all active triggers.
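Several of these tools map to queries you could also run by hand. The sketch below is illustrative rather than a record of the exact queries GraphChat issues, and the algorithm procedures assume MAGE is installed:

```cypher
// Metadata tools correspond to standard Memgraph queries
SHOW STORAGE INFO;
SHOW INDEX INFO;
SHOW CONSTRAINT INFO;
SHOW TRIGGERS;

// Algorithm tools correspond to MAGE procedures (assumes MAGE is installed)
CALL pagerank.get() YIELD node, rank
RETURN node, rank ORDER BY rank DESC LIMIT 10;

CALL betweenness_centrality.get() YIELD node, betweenness_centrality
RETURN node, betweenness_centrality
ORDER BY betweenness_centrality DESC LIMIT 10;
```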
Tool usage before Lab 3.3
In earlier versions, GraphChat only used the run-cypher-query tool. This tool:
- Generates and runs Cypher queries from LLM prompts
- Automatically retries invalid queries up to the retry limit defined in the model configuration
If you want to replicate this behavior, disable all other tools in the configuration.
Enhanced tool support from Lab 3.3 onward
Starting with Lab 3.3, GraphChat can use all built-in tools automatically. The LLM decides which tool to use based on the context of the question.
You can also define your own custom tools by providing:
- A tool name
- A description
- A parameterized Cypher query
Ensure each custom tool has a distinct name and description so the LLM can choose the right one.
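For example, a custom tool might wrap a parameterized Cypher query like the one below. The tool name, description, node label, and $name parameter are hypothetical; adapt them to your own schema:

```cypher
// Hypothetical custom tool: "find-person-connections"
// Description: "Finds all direct connections of a person by name."
// The $name parameter is filled in by the LLM when it calls the tool.
MATCH (p:Person {name: $name})-[r]-(other)
RETURN p.name AS person, type(r) AS relationship, other;
```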
Explore exchanges
After receiving an answer, you can expand it to inspect what the LLM did behind the scenes. If a tool was used, you’ll see which tool was called and the raw response from the Assistant.
You can also explore:
- Number of previous exchanges, tokens, and tools used
- Schema used – Click to visualize the included schema
- Context used – Click to view the full prompt context sent to the LLM