Pushing MCP Forward: What’s New in the Memgraph MCP Server

By Ante Javor
4 min read · November 24, 2025

Earlier this year, we introduced the initial implementation of the Memgraph MCP Server: a lightweight server implementation of the Model Context Protocol (MCP) that bridges Memgraph with modern LLMs, enabling agents to query and explore graph data through natural language.

Since its release, MCP adoption has grown incredibly fast. Support for new protocol features is rolling out, and servers are evolving accordingly.

We are making sure our server keeps pace with new MCP versions, so that the community and customers building with Memgraph have the latest features at their disposal.

Today, we’re excited to share what’s new in the Memgraph MCP Server and how we're evolving it.

What Has Changed?

1. A redesigned and modular server architecture

Our PyPI package now ships with multiple servers, built around pluggable implementations:

  • server: stable production server
  • memgraph-experimental: experiment ground for MCP features (sampling, elicitation, adaptive index management)
  • … any server the community is willing to create 🛠️

Switching servers is now as simple as:

MCP_SERVER=memgraph-experimental uv run mcp-memgraph

This allows us to ship fast-moving features without disrupting the production server. If you examine today's MCP clients, most of them support only tools, so the primary server should stay a bit conservative until the clients catch up.
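The MCP_SERVER switch shown above can be sketched as a simple registry lookup. This is a minimal, hypothetical sketch: the registry contents and function names are illustrative, not the actual mcp-memgraph internals.

```python
import os

# Hypothetical sketch of MCP_SERVER-based selection; names are
# illustrative, not the real mcp-memgraph internals.
SERVER_REGISTRY = {
    "server": "stable production server",
    "memgraph-experimental": "experimental MCP features",
}

def select_server(default="server"):
    """Pick a server implementation from the MCP_SERVER env variable."""
    name = os.environ.get("MCP_SERVER", default)
    if name not in SERVER_REGISTRY:
        raise ValueError(f"Unknown MCP server: {name!r}")
    return name

os.environ["MCP_SERVER"] = "memgraph-experimental"
print(select_server())  # memgraph-experimental
```

An unknown name fails fast instead of silently falling back to the production server, which keeps misconfigured environments easy to spot.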

2. Full integration with Memgraph Toolbox

The MCP server now exposes a growing set of tools directly through MCP, each backed by the memgraph-toolbox:

| Tool | Description |
| --- | --- |
| run_query | General Cypher execution (with optional read-only mode) |
| get_schema | Fetch the graph schema |
| get_index | Fetch the graph indexes |
| get_constraint | Fetch the graph constraints |
| get_configuration | Fetch Memgraph configuration |
| get_storage | Fetch storage information |
| get_triggers | Fetch Memgraph triggers |
| get_page_rank | Run PageRank and return the top n nodes |
| get_betweenness_centrality | Run betweenness centrality |
| get_node_neighborhood | Local relevance expansion |
| search_node_vectors | Vector search on Memgraph |

These tools enable LLMs to perform new operations, such as searching node vectors.

There is an ongoing debate about whether we should expand or contract the tool set in the production server: every tool definition eats into the model's context, so adding tools is not free.

There are a lot more tools we could add, and with the pluggable servers we now have more options for where to put them.
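From the client's perspective, each tool in the table above is just a named callable the LLM can invoke. A minimal sketch of such a dispatch table, with stub bodies standing in for the real memgraph-toolbox calls (the returned query strings are real Memgraph commands, but the wiring here is purely illustrative):

```python
# Illustrative tool registry; stubs stand in for memgraph-toolbox calls.
def get_schema():
    return "SHOW SCHEMA INFO"  # stub: the real tool runs this against Memgraph

def get_index():
    return "SHOW INDEX INFO"   # stub

TOOLS = {
    "get_schema": get_schema,
    "get_index": get_index,
}

def call_tool(name):
    """Dispatch an MCP tool call by name, the way an LLM client would."""
    if name not in TOOLS:
        raise KeyError(f"Unknown tool: {name}")
    return TOOLS[name]()

print(call_tool("get_index"))  # SHOW INDEX INFO
```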

3. A Safer, Stricter Read-Only Mode

LLMs are powerful and can get funny ideas about which Cypher queries to run.

To prevent accidental writes, we added a strict read-only mode that blocks:

  • CREATE
  • MERGE
  • DELETE
  • SET
  • DROP
  • REMOVE

Any attempt returns a structured error:

{
  "error": "Write operations are not allowed in read-only mode",
  "hint": "Set MCP_READ_ONLY=false to enable write operations"
}

Developers can flip the flag if they want the agent to create or mutate graph data:

export MCP_READ_ONLY=false

This works only with the production server.
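The guard described above can be sketched in a few lines, assuming simple keyword matching on the blocked clauses. This is a minimal sketch, not the server's actual implementation, which may parse queries more carefully:

```python
import re

# Sketch of a strict read-only guard based on the blocked keywords above.
WRITE_KEYWORDS = ("CREATE", "MERGE", "DELETE", "SET", "DROP", "REMOVE")
WRITE_RE = re.compile(r"\b(" + "|".join(WRITE_KEYWORDS) + r")\b", re.IGNORECASE)

def check_read_only(query, read_only=True):
    """Return a structured error dict for writes in read-only mode, else None."""
    if read_only and WRITE_RE.search(query):
        return {
            "error": "Write operations are not allowed in read-only mode",
            "hint": "Set MCP_READ_ONLY=false to enable write operations",
        }
    return None

print(check_read_only("MATCH (n) RETURN n"))         # None
print(check_read_only("CREATE (n:Person)") is None)  # False
```

Keyword matching is deliberately conservative: a query that merely mentions a blocked word is rejected, which is the safer failure mode for an agent with write access to your graph.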

4. Support for Both Streamable HTTP and STDIO

This makes the server usable across different environments.

To use the HTTP mode, simply start the server via Docker or uv; HTTP is the default:

docker run --rm -p 8000:8000 mcp-memgraph

After that, the MCP server should be accessible at:

http://localhost:8000/mcp/

If you want to change the transport, use the MCP_TRANSPORT environment variable. Here is an example:

docker run --rm -i -e MCP_TRANSPORT=stdio mcp-memgraph

5. A New Developer-Friendly Installation Path Using uv

To run the server locally:

uv run --with mcp-memgraph --python 3.13 mcp-memgraph

This is now the recommended method for running without Docker.

For example, to set up the Memgraph MCP server in Claude Desktop, a minimal Claude config would look like this:

{
  "mcpServers": {
    "mcp-memgraph": {
      "command": "uv",
      "args": [
        "run",
        "--with", "mcp-memgraph",
        "--python", "3.13",
        "mcp-memgraph"
      ]
    }
  }
}

Docker Improvements

You can now build and run your own version of the server directly from the ai-toolkit monorepo:

docker build -f integrations/mcp-memgraph/Dockerfile -t mcp-memgraph:latest .

docker run --rm -p 8000:8000 mcp-memgraph:latest

Or for stdio (Claude / VS Code):

docker run --rm -i -e MCP_TRANSPORT=stdio mcp-memgraph:latest

We also added support for:

  • external hosts
  • authentication
  • non-default database names

Example:

docker run --rm \
-e MEMGRAPH_URL=bolt://memgraph.internal:7687 \
-e MEMGRAPH_USER=myuser \
-e MEMGRAPH_PASSWORD=secret \
mcp-memgraph
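Server-side, picking up these settings amounts to reading the environment with sensible defaults. A sketch under assumptions: the MEMGRAPH_URL, MEMGRAPH_USER, and MEMGRAPH_PASSWORD names match the docker example above, while the MEMGRAPH_DATABASE name and all default values are illustrative guesses, not the documented configuration surface:

```python
import os

# Sketch: assemble Bolt connection settings from the environment.
# Defaults and the MEMGRAPH_DATABASE variable are assumptions.
def connection_settings(env=None):
    env = os.environ if env is None else env
    return {
        "url": env.get("MEMGRAPH_URL", "bolt://localhost:7687"),
        "user": env.get("MEMGRAPH_USER", ""),
        "password": env.get("MEMGRAPH_PASSWORD", ""),
        "database": env.get("MEMGRAPH_DATABASE", "memgraph"),
    }

settings = connection_settings({"MEMGRAPH_URL": "bolt://memgraph.internal:7687"})
print(settings["url"])  # bolt://memgraph.internal:7687
```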

What’s Next?

We’re working on an experimental server that integrates FastMCP’s sampling and elicitation. The idea is to use it as a proof of concept whose features will ultimately be incorporated into the production Memgraph MCP server. We are actively exploring this direction, and we welcome your feedback.

Whether you're building a knowledge graph assistant, an agent that maintains data consistency, or an AI that analyzes real-time graph data, the Memgraph MCP Server now provides a clean, safe, and robust foundation.

Install with uv, use Docker, or connect from VS Code or Claude Desktop to start chatting with your graph. Tell us what you think!
