Benchmarking Memgraph

Benchmarking Memgraph against others

At Memgraph, we believe we are building the fastest graph database on the market—and much of that performance comes from our in-memory architecture.

Many disk-based graph databases must write data to disk during every transaction and retrieve it back during reads. While these databases often rely on caching layers to speed up reads, cache invalidation becomes a bottleneck in high-throughput write scenarios. Every new write risks invalidating parts of the cache, leading to more disk reads and slower performance.

In read-only workloads, the performance gap between Memgraph and other databases may appear smaller at first glance. However, even in these scenarios, on-disk graph databases can suffer from unpredictable latency due to cache invalidation, especially if the system has recently handled write operations. This can lead to a poor user experience, with inconsistent query response times depending on whether data is fetched from cache or disk.

Memgraph, on the other hand, consistently performs better by keeping the entire dataset in memory, ensuring predictable latency and fast response times regardless of previous operations or system state.
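
One way to make the "predictable latency" claim concrete is to measure the spread between median and tail latency of a single read query. The sketch below is not part of mgbench; it is a minimal check using the Bolt-compatible neo4j Python driver, and it assumes an unauthenticated local Memgraph instance on bolt://localhost:7687 with a :Node label already loaded.

```python
# Minimal latency-spread check (a sketch, not mgbench): run one read query
# repeatedly and report percentile latencies.
import statistics
import time

from neo4j import GraphDatabase  # Memgraph speaks the Bolt protocol

QUERY = "MATCH (n:Node) RETURN count(n);"  # placeholder read query
RUNS = 200

with GraphDatabase.driver("bolt://localhost:7687", auth=None) as driver:
    with driver.session() as session:
        latencies = []
        for _ in range(RUNS):
            start = time.perf_counter()
            session.run(QUERY).consume()  # consume() waits for the full result
            latencies.append(time.perf_counter() - start)

cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"p50={p50*1000:.2f} ms  p95={p95*1000:.2f} ms  p99={p99*1000:.2f} ms")
```

A narrow gap between p50 and p99 is what predictable latency looks like in practice; a wide gap usually points to cache misses or background I/O on disk-based systems.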

Introducing mgbench

To support fair and realistic comparisons, we built mgbench, a benchmarking framework located within our core Memgraph repository. Unlike simplistic benchmarks that run a query once and record latency, mgbench focuses on repeatable, controlled testing across different conditions and vendors.

What sets mgbench apart?

  • Eliminates noise from unrelated system activity
  • Differentiates between cold and hot database states (see the sketch after this list)
  • Detects how query repetition, data freshness, or system-level caching affect results
  • Supports adding new vendors and workloads easily
  • Provides both latency and throughput metrics at scale
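
The cold-versus-hot distinction in the list above can be illustrated with a small timing harness (again a sketch, not mgbench itself): the first execution after a fresh connection approximates a "cold" run, while repetitions after a warm-up phase measure the "hot" steady state. A truly cold database state would also require restarting the server and dropping OS caches, which is exactly the kind of setup mgbench automates. The connection details and query below are assumptions.

```python
# Sketch of a cold-vs-hot comparison against a local, unauthenticated Memgraph.
import time

from neo4j import GraphDatabase

QUERY = "MATCH (n)-[r]->(m) RETURN count(r);"  # placeholder query
WARMUP = 10   # repetitions used only to warm up caches and query plans
RUNS = 100    # repetitions measured as the "hot" steady state

def timed_run(session, query):
    start = time.perf_counter()
    session.run(query).consume()
    return time.perf_counter() - start

with GraphDatabase.driver("bolt://localhost:7687", auth=None) as driver:
    with driver.session() as session:
        cold = timed_run(session, QUERY)          # first, unwarmed execution
        for _ in range(WARMUP):
            timed_run(session, QUERY)
        hot = [timed_run(session, QUERY) for _ in range(RUNS)]

print(f"cold: {cold * 1000:.2f} ms, hot avg: {sum(hot) / len(hot) * 1000:.2f} ms")
```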

Many teams make the mistake of testing queries in isolation. But in real production workloads, especially those with a high write volume, read performance can degrade significantly. This is where Memgraph’s in-memory model shines, delivering consistent performance under pressure where disk-based solutions tend to fall behind.

Realistic + mixed workloads

For the reasons listed above, mgbench supports:

  • Realistic workloads: Simulate production-like distributions of reads, writes, and analytical queries.
  • Mixed workloads: Analyze how combining different query types affects individual performance metrics (see the sketch below).
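
A mixed workload in this spirit can be sketched as a weighted draw between read and write queries, tracking latency per query type. The query strings, the 80/20 split, and the connection details below are illustrative assumptions, not mgbench’s actual workload definitions.

```python
# Sketch of a mixed read/write workload against a local, unauthenticated Memgraph.
import random
import time
from collections import defaultdict

from neo4j import GraphDatabase

WORKLOAD = [
    ("read",  "MATCH (n:Node) RETURN n LIMIT 100;", 0.8),  # 80% reads (assumed split)
    ("write", "CREATE (:Node {p: 1});",             0.2),  # 20% writes
]
TOTAL_QUERIES = 1_000

with GraphDatabase.driver("bolt://localhost:7687", auth=None) as driver:
    with driver.session() as session:
        per_type = defaultdict(list)
        names, queries, weights = zip(*WORKLOAD)
        for _ in range(TOTAL_QUERIES):
            i = random.choices(range(len(names)), weights=weights)[0]
            start = time.perf_counter()
            session.run(queries[i]).consume()
            per_type[names[i]].append(time.perf_counter() - start)

for name, latencies in per_type.items():
    avg_ms = 1000 * sum(latencies) / len(latencies)
    print(f"{name}: {len(latencies)} queries, avg {avg_ms:.2f} ms")
```

Interleaving writes like this is what exposes the cache-invalidation penalty described earlier: on disk-based systems, the same read query typically becomes slower and less predictable than it was in a read-only run.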

Ready to test for yourself? Try out mgbench with your own data and workloads, and see how Memgraph performs under realistic production conditions. 🚀

Visualizing results with Benchgraph

While mgbench gives you full control over benchmarking and produces raw results in JSON format, we know that analyzing performance metrics at scale can be time-consuming. That’s why we created Benchgraph—a companion visualization tool for mgbench.

Benchgraph is not required to use mgbench, but it offers a convenient way to visualize and explore benchmark results across a variety of run conditions:

  • Cold vs. hot cache scenarios
  • Isolated query performance vs. realistic workload distributions
  • Latency and throughput trends across vendors and dataset sizes
  • Per-query comparison across the whole workload

By turning JSON output into interactive dashboards, Benchgraph helps you identify trends and outliers quickly, spot regressions, and better understand how each database behaves under pressure.
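
If you want a quick look at the raw numbers before (or instead of) loading them into Benchgraph, a few lines of Python are enough to summarize a results file. The field names below (a "queries" map of per-query duration lists, in seconds) are a hypothetical layout used only for illustration; adapt them to the actual structure of your mgbench output.

```python
# Summarize a (hypothetically structured) mgbench JSON results file.
import json
import statistics
from pathlib import Path

results = json.loads(Path("results.json").read_text())  # hypothetical file name

# Assumed shape: {"queries": {"query_name": [duration_in_seconds, ...], ...}}
for name, durations in results.get("queries", {}).items():
    mean_ms = statistics.mean(durations) * 1000
    p99_ms = statistics.quantiles(durations, n=100)[98] * 1000
    print(f"{name:40s} mean={mean_ms:8.2f} ms  p99={p99_ms:8.2f} ms")
```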

To explore the published benchmarks, check out the official Benchgraph site!