
Vector Search in Memgraph

Similarity and structure in a single engine. Run vector search and graph traversal together — or use Memgraph alongside your existing vector database.

Modern AI needs both. Memgraph delivers both.

Vector search finds what's semantically similar. Graph traversal finds what's structurally connected. Memgraph handles both in a single engine — with 85% less memory for vector storage.

85%
Memory reduction (Single Store Index)
HNSW
Powered by USearch (C++)
3 metrics
Cosine, L2, and inner product
v3.2+
Production-ready since Memgraph 3.2
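The three supported metrics score vectors differently: cosine compares direction regardless of magnitude, L2 measures straight-line distance (smaller is more similar), and inner product rewards both alignment and magnitude. As a quick illustration, here is a plain-Python sketch of all three on two toy vectors (these helper functions are written for this example, not part of Memgraph's API):

```python
import math

def cosine_sim(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def l2_distance(a, b):
    # Euclidean (L2) distance: smaller means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    # Inner (dot) product: larger means more similar
    return sum(x * y for x, y in zip(a, b))

a, b = [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]
print(round(cosine_sim(a, b), 3))   # 0.5
print(round(l2_distance(a, b), 3))  # 1.414
print(inner_product(a, b))          # 1.0
```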

Two ways to add graph intelligence to vector search.

Native vector search in Memgraph

No separate vector database, no data duplication, no synchronization overhead.

  • Single Store Vector Index — vectors stored once, not duplicated
  • Node and edge vector indexes
  • Configurable scalar kinds (f32, f16) for precision/memory trade-off
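To see why the f32/f16 choice matters, here is a back-of-envelope calculation of raw vector storage for an assumed index of one million 384-dimensional embeddings (illustrative arithmetic only, not Memgraph's exact in-memory layout):

```python
# Hypothetical index size: 1M vectors x 384 dimensions
num_vectors = 1_000_000
dimension = 384

bytes_f32 = num_vectors * dimension * 4  # 4 bytes per float32 component
bytes_f16 = num_vectors * dimension * 2  # 2 bytes per float16 component

print(round(bytes_f32 / 1024**3, 2))  # ~1.43 GiB at f32
print(round(bytes_f16 / 1024**3, 2))  # ~0.72 GiB at f16 (half the raw memory)
```

Halving the scalar width halves raw vector memory, at the cost of some precision in the stored embeddings.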

Best for: teams building new systems who want a single engine for hybrid retrieval.
Memgraph + external vector database

Keep Pinecone, Weaviate, Qdrant, Chroma, or Milvus. Add Memgraph as the graph layer.

  • Memgraph handles graph storage and traversal
  • Vector DB handles embedding storage and similarity search
  • No migration required — add Memgraph to your existing stack

Best for: teams with an established vector pipeline who need to add graph reasoning.

How native vector search works in Memgraph.

STEP 1. Create a vector index
CREATE VECTOR INDEX movies_index ON :Movie(embedding)
WITH CONFIG {'dimension': 384, 'capacity': 10000, 'metric': 'cos'};
STEP 2. Search by similarity
CALL vector_search.search('movies_index', 5, $query_vector)
YIELD node, similarity
RETURN node, similarity;
STEP 3. Combine with graph traversal
CALL vector_search.search('movies_index', 5, $query_vector)
YIELD node, similarity
MATCH (node)-[:DIRECTED_BY]->(director)-[:DIRECTED]->(other)
RETURN other.title, similarity
ORDER BY similarity DESC;

One query. Both similarity and structure. No external system required.
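Conceptually, the similarity step in the queries above ranks nodes by how close their stored embedding is to the query vector. The sketch below mimics that behavior in plain Python with a brute-force scan over a toy in-memory index (the real index uses HNSW via USearch, not a scan; the `movies` data and `search` helper are invented for this example):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "index": node -> embedding
movies = {
    "Alien":  [0.9, 0.1, 0.0],
    "Aliens": [0.8, 0.2, 0.1],
    "Amelie": [0.1, 0.9, 0.3],
}

def search(index, limit, query_vector):
    # Brute-force analogue of:
    #   CALL vector_search.search('movies_index', limit, $query_vector)
    #   YIELD node, similarity
    scored = [(node, cosine(vec, query_vector)) for node, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:limit]

for node, similarity in search(movies, 2, [1.0, 0.0, 0.0]):
    print(node, round(similarity, 3))
```

In Memgraph the top-k matches come back as graph nodes, so the follow-on MATCH in step 3 can immediately traverse from them.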

Why this matters for AI workloads.

Hybrid retrieval in one query
Find semantically similar entities, then traverse their relationships to build connected context — in a single pipeline, without round-tripping between systems.
No synchronization overhead
When vectors and graph live in the same engine, there’s no ETL between databases, no eventual consistency, no data drift between your vector index and your knowledge graph.
In-memory speed for both
Both vector search and graph traversal run in memory, so neither operation touches disk and both typically complete in sub-millisecond time.

Which approach is right for you?

Use native vector search when:
  • Building a new GraphRAG or AI system from scratch
  • Eliminating a separate vector database from your stack
  • Hybrid queries (vector + graph in one query) matter
  • Minimizing operational complexity and memory costs
Use Memgraph + external vector DB when:
  • You already have a vector database in production
  • You need specialized vector features such as managed scaling or multi-modal embeddings
  • Adding graph reasoning without migrating your vector pipeline
  • You want to scale vector and graph workloads independently

Works with your stack.

LangGraph

A Memgraph toolkit of 7+ tools for building stateful, multi-actor agent applications backed by graph state.

LlamaIndex

Create knowledge graphs from unstructured data and query with natural language via Memgraph graph store.

LightRAG

Fast retrieval-augmented generation combining graph databases with LLMs for creating and querying knowledge graphs.

Add structure to your search.

© 2026 Memgraph Ltd. All rights reserved.