Vector Search in Memgraph
Vector search finds what's semantically similar. Graph traversal finds what's structurally connected. Modern AI needs both, and Memgraph delivers both in a single engine, with 85% less memory for vector storage.
Two ways to add graph intelligence to vector search.
Go fully native: no separate vector database, no data duplication, no synchronization overhead.
- Single Store Vector Index — vectors stored once, not duplicated
- Node and edge vector indexes
- Configurable scalar kinds (f32, f16) for precision/memory trade-off
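As a sketch of how these options combine (the index names, labels, and properties are illustrative, and the `scalar_kind` config key and `CREATE VECTOR EDGE INDEX` syntax follow recent Memgraph releases, so check your version's documentation):

```cypher
// Node vector index storing embeddings as half-precision floats (f16)
// to roughly halve memory versus the default f32.
CREATE VECTOR INDEX docs_index ON :Document(embedding)
WITH CONFIG {'dimension': 768, 'capacity': 100000, 'metric': 'cos',
             'scalar_kind': 'f16'};

// Edge vector index on a relationship property.
CREATE VECTOR EDGE INDEX mentions_index ON :MENTIONS(embedding)
WITH CONFIG {'dimension': 768, 'capacity': 100000, 'metric': 'cos'};
```

The f16 trade-off costs a small amount of similarity precision in exchange for the memory savings, which is usually acceptable for retrieval workloads.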
Or keep Pinecone, Weaviate, Qdrant, Chroma, or Milvus, and add Memgraph as the graph layer.
- Memgraph handles graph storage and traversal
- Vector DB handles embedding storage and similarity search
- No migration required — add Memgraph to your existing stack
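One common wiring for this split: the external vector database returns the IDs of the top-k most similar items, and those IDs are passed into Memgraph as a query parameter for graph expansion. A minimal sketch, assuming nodes carry a `doc_id` property matching the vector DB's record IDs (the `Document` label and `CITES` relationship are illustrative):

```cypher
// $ids holds the top-k document IDs returned by the external vector search.
MATCH (d:Document)
WHERE d.doc_id IN $ids
MATCH (d)-[:CITES]->(related:Document)
RETURN d.doc_id, collect(related.title) AS related_titles;
```

The only contract between the two systems is the shared ID, so neither store needs to duplicate the other's data.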
How native vector search works in Memgraph.
```cypher
CREATE VECTOR INDEX movies_index ON :Movie(embedding)
WITH CONFIG {'dimension': 384, 'capacity': 10000, 'metric': 'cos'};

CALL vector_search.search('movies_index', 5, $query_vector)
YIELD node, similarity
MATCH (node)-[:DIRECTED_BY]->(director)-[:DIRECTED]->(other)
RETURN other.title, similarity
ORDER BY similarity DESC;
```

One query. Both similarity and structure. No external system required.
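The procedure call also composes with ordinary Cypher filtering and aggregation. A hedged sketch that keeps only strong matches before traversing (the 0.8 threshold and the schema are illustrative):

```cypher
CALL vector_search.search('movies_index', 20, $query_vector)
YIELD node, similarity
WHERE similarity > 0.8
MATCH (node)-[:DIRECTED_BY]->(director)
RETURN director.name, count(node) AS similar_movies, max(similarity) AS best
ORDER BY best DESC;
```

Requesting more candidates than you ultimately need (20 here) and then filtering by similarity is a common pattern, since the graph step may discard some of the raw vector hits.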
Which approach is right for you?

Choose native vector search when:

- Building a new GraphRAG or AI system from scratch
- Eliminating a separate vector database from your stack
- Hybrid queries (vector + graph in one query) matter
- Minimizing operational complexity and memory costs

Choose the hybrid approach (external vector DB plus Memgraph) when:

- You already have a vector database in production
- You need specialized features (managed scaling, multi-modal embeddings)
- Adding graph reasoning without migrating your vector pipeline
- Separate scaling of vector and graph workloads is a priority
Works with your stack.
LangGraph
Memgraph toolkit with 7+ tools for building stateful, multi-actor agent applications with graph-backed state management.
LlamaIndex
Create knowledge graphs from unstructured data and query with natural language via Memgraph graph store.
LightRAG
Fast retrieval-augmented generation combining graph databases with LLMs for creating and querying knowledge graphs.