Debugging Memgraph
Why enable user debugging?
User-driven debugging helps improve Memgraph’s performance and reliability by providing diagnostic data from your environment. This data assists us in reproducing and resolving issues faster, especially for bugs that are hard to replicate.
To help with this, our containers come equipped with user-friendly debugging tools, empowering you to identify and report problems more effectively.
Choose the right debug image
Memgraph provides Docker images built in RelWithDebInfo mode that include tools
like perf, gdb, and pgrep. These images are about 10% slower but enable
detailed debugging.
To pull a debug image:
```
docker image pull memgraph/memgraph:<memgraph_version>-relwithdebinfo
```

For Memgraph MAGE:

```
docker image pull memgraph/memgraph-mage:<MAGE_version>-memgraph-<memgraph_version>-relwithdebinfo
```

All images built in RelWithDebInfo mode have the suffix -relwithdebinfo.
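For example, to pull a specific debug tag (the tag shown here is one referenced later in this guide):

```
docker image pull memgraph/memgraph:3.7-relwithdebinfo
```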
Run Memgraph in debug mode
Run the Memgraph container in privileged mode so that debugging tools like
gdb and perf can function. For example:

```
docker container run --name mg --privileged -p 7687:7687 -p 9091:9091 memgraph --log-level=TRACE --also-log-to-stderr
```

Accessing the container
All debugging is performed inside the container. To enter the container, execute the following command:

```
docker container exec -it -u root mg bash
```

The -u root flag enables root privileges inside the container, which are
necessary for running the debugging tools.
Debugging tools overview
Memgraph supports the following debug capabilities:
- Using GDB: attaching GDB to the running Memgraph process and inspecting threads
- Generating a core dump after Memgraph crashes
- Running Memgraph in GDB inside Docker: directly running Memgraph or MAGE under GDB inside a Docker container
- Using perf to identify performance bottlenecks
Using GDB
GDB and pgrep come preinstalled in the Memgraph container that has the debug
symbols. Since Memgraph is already running there on port 7687, you can attach
GDB to the running Memgraph process with the following command:

```
gdb -p $(pgrep memgraph)
```

The Memgraph process will most likely have PID 1, but we use pgrep to be certain.
Useful GDB commands:
| Command | Description |
|---|---|
| CTRL + C | Pause execution. |
| c | Continue execution. |
| info threads | List all executing threads. |
| t <x> | Switch to the thread with number x. |
| bt | Print the backtrace of the current thread. |
| bt full | Print the backtrace with extra information, such as local variables. |
| frame <x> | Select the frame with number x in the backtrace. |
| list | Print the source code around the current line of the frame. |
| up | Go one frame up in the backtrace. |
| down | Go one frame down in the backtrace. |
| info locals | Print the local variables of the current frame. |
| info args | Print the function arguments of the current frame. |
| print $local_var | Print the value of the variable $local_var. |
When debugging Memgraph, we usually first want to see all the threads currently running. We do that by issuing:

```
info threads
```

After identifying a thread running code that could belong to the Memgraph repository, we can switch to it with:

```
t <x>
```

where x is the specific thread number. The backtrace can then be printed with:

```
bt
```
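A typical inspection session might look like the following sketch (thread and frame numbers are illustrative):

```
(gdb) info threads      # list all threads
(gdb) t 12              # switch to thread 12
(gdb) bt full           # full backtrace with local variables
(gdb) frame 3           # select frame 3
(gdb) info locals       # inspect local variables in that frame
```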
Generating core dump via Docker

To generate a core dump, you need to perform a few steps both on the host and in the container.

Set no size limit on the core dump

First, remove any limit on the size of the core dump that can be generated:

```
ulimit -c unlimited
```

Mount the correct volume
When Memgraph crashes, we want the core dump file to end up on the host
system, so we provide an appropriate volume when starting the container.
Additionally, don't forget to set the --privileged flag as noted in the
previous sections.

```
docker container run --name mg --privileged -v /home/user/cores:/tmp/cores -p 7687:7687 -p 9091:9091 memgraph:2.16.0_17_050d5c985 --log-level=TRACE --also-log-to-stderr
```

Set up the container
Additionally, the following commands need to be executed inside the container after it has started so that a correct core dump can be generated:

```
ulimit -c unlimited
mkdir -p /tmp/cores
chmod a+rwx /tmp/cores
echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
```

The core_pattern placeholders %e, %p, %h, and %t expand to the executable name, PID, hostname, and timestamp, respectively. When Memgraph crashes, a core dump will be generated, and you will see it on the host system if you have mounted the volume correctly.
Inspecting the container
Start a new container from the identical image so the same debug symbols are present. This time we don't need the Memgraph process listening on port 7687, so we can ignore it.
You will need to copy the core dump file into the container with the docker cp
command.
After logging into the container as root:

```
docker container exec -it -u root mg bash
```

we will execute GDB with the core dump file provided:

```
gdb /usr/lib/memgraph/memgraph --core=/core.memgraph.file
```

where core.memgraph.file is the name of your core dump file. You may need to
set appropriate permissions on the core dump file. You can check the list of
useful GDB commands in the sections above.
To find out more about setting core dumps, you can check this article.
Generating core dump via Docker Compose
The setup with Docker Compose is similar to Docker. You will need to bind the volume, run Memgraph in privileged mode, and make sure you set no size limit on the generated core dump.
Below is an example Docker Compose file that can generate a core dump:
```
services:
  memgraph:
    image: memgraph:2.16.0_17_050d5c985
    container_name: mg
    privileged: true
    ports:
      - "7687:7687"
      - "7444:7444"
      - "9091:9091"
    volumes:
      - /home/user/cores:/tmp/cores
    command: ["--log-level=TRACE", "--also-log-to-stderr=true"]
    ulimits:
      core:
        hard: -1
        soft: -1
  lab:
    image: memgraph/lab:latest
    container_name: memgraph-lab
    ports:
      - "3000:3000"
    depends_on:
      - memgraph
    environment:
      - QUICK_CONNECT_MG_HOST=memgraph
      - QUICK_CONNECT_MG_PORT=7687
```
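A quick sketch of the resulting workflow, assuming the host directory used above:

```
docker compose up -d
# after a crash, the core dump appears in the mounted host directory:
ls -lh /home/user/cores
```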
Running Memgraph in GDB inside Docker

To run Memgraph or MAGE in GDB inside a Docker container, you can use the following commands to override the entry point and create a bind mount for core dumps:
```
# create a directory with sufficient permissions to save the core on the host
mkdir -p cores
chmod 777 cores

# run memgraph via the debug script by overriding the entry point, and creating a bind mount for the core dump
docker run --rm \
  --name memgraph \
  -v "$(pwd)/cores:/tmp/cores" \
  --entrypoint /usr/lib/memgraph/run_with_gdb.sh \
  -p 7687:7687 \
  memgraph/memgraph:3.7-relwithdebinfo
```

Optionally, the core dump path can be overridden by setting the CORE_PATH variable:
```
docker run --rm \
  --name memgraph \
  -v "$(pwd)/cores:/some/other/dir/cores" \
  -e CORE_PATH=/some/other/dir/cores/core.dump \
  --entrypoint /usr/lib/memgraph/run_with_gdb.sh \
  -p 7687:7687 \
  memgraph/memgraph:3.7-relwithdebinfo
```

If Memgraph crashes, a core dump will be created in the location specified by
the CORE_PATH variable (/tmp/cores/core by default) and the full backtrace
will be printed to the terminal.
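If a crash occurs, one way to inspect the core on the host is to first extract the matching binary (with debug symbols) from the same image. A minimal sketch, assuming the bind mount above and a core file named core.dump:

```
# extract the memgraph binary from the image without running it
id=$(docker create memgraph/memgraph:3.7-relwithdebinfo)
docker cp "$id":/usr/lib/memgraph/memgraph ./memgraph-bin
docker rm "$id"

# open the core dump with the matching binary
gdb ./memgraph-bin --core=./cores/core.dump
```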
Using heaptrack with Docker
All RelWithDebInfo images come with heaptrack installed. You can use it to track the memory usage of Memgraph.
Before starting the container, create a directory to store the heaptrack data:
```
mkdir -p /tmp/heaptrack
chmod a+rwx /tmp/heaptrack
```

Then start the container with the following command:
```
docker run -d --rm \
  --name memgraph \
  -v /tmp/heaptrack:/data \
  --entrypoint /usr/bin/heaptrack \
  memgraph/memgraph:3.6.1-relwithdebinfo \
  --output /data/heaptrack.memgraph \
  -- \
  /usr/lib/memgraph/memgraph
```

Running the MAGE container using this method will result in a segmentation fault due to the way heaptrack interacts with Python, so Memgraph should be launched by heaptrack using the --use-inject flag:
```
docker run -d --rm \
  --name memgraph \
  -p 7687:7687 \
  -v /tmp/heaptrack:/data \
  --entrypoint /usr/bin/heaptrack \
  memgraph/memgraph-mage:3.6.1-relwithdebinfo \
  --output /data/heaptrack.memgraph \
  --use-inject /usr/lib/memgraph/memgraph
```

To stop Memgraph gracefully:

```
docker exec memgraph bash -c "kill -SIGINT \$(pidof memgraph)"
```

Then the heaptrack GUI can be used to inspect the heaptrack data on the host machine:

```
heaptrack /tmp/heaptrack/heaptrack.memgraph.gz
```

The heaptrack GUI can be installed on the host machine by issuing the command (Debian/Ubuntu):
```
sudo apt install heaptrack
```

or the command (Fedora):

```
sudo dnf install heaptrack
```
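If you prefer a text summary instead of the GUI, the heaptrack_print analyzer that ships with heaptrack can be used; a quick sketch:

```
heaptrack_print /tmp/heaptrack/heaptrack.memgraph.gz | less
```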
Profiling with perf

Profiling with perf is the most common operation when Memgraph is hanging or performing slowly.
The following steps show how to check which parts of Memgraph are stalling during query execution, so that the information can be used to improve the system.
Before running perf, you need to bind the Memgraph binary to the local filesystem. Start Memgraph with the volume bound like this:

```
docker container run --name mg -p 7687:7687 --privileged -v memgraph-binary:/usr/lib/memgraph <memgraph_image> --log-level=TRACE --also-log-to-stderr
```

Record performance data

Inside the container, profile the running Memgraph process with the following command:
```
perf record -p $(pgrep memgraph) --call-graph dwarf sleep 5
```

The command profiles the Memgraph process for 5 seconds; you can tweak the
duration yourself. It generates a file called perf.data in the directory
where you ran it inside the container.
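For a quick look without copying data to the host, you can also inspect the recording directly inside the container with perf report, for example:

```
perf report --stdio -i perf.data | head -n 50
```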
Install hotspot (GUI tool)
We will need to install a GUI tool called hotspot, which will help us generate
a flamegraph. Install hotspot on the host machine with:

```
apt install hotspot
```

If your machine does not support APT, please check the hotspot repo and follow its installation steps.
Copy `perf.data` to host
After recording the data, copy the perf information from the container to the host. On the host, run:

```
docker cp <container_id>:<path_to_perf.data> .
```

where container_id is the Memgraph container ID, and path_to_perf.data is the
absolute path to the generated perf.data file inside the container.
Link debug symbols
For hotspot to identify the debug symbols and draw the flamegraph, the path to
the debug symbols must match the path inside the container exactly. In the
container that is the /usr/lib/memgraph/memgraph binary, so we need to make a
symbolic link from the container volume to the host system:

```
ln -s <path_to_docker_volume_debug_symbols_binary> /usr/lib/memgraph/memgraph
```

Open flamegraph
If you did everything correctly, starting hotspot with

```
hotspot perf.data
```

should show a flamegraph similar to the one in the picture below.

Debugging Memgraph under Kubernetes (k8s)
General commands
To begin with, the master of all kubectl commands is:

```
kubectl get all
```

Managing nodes:
```
kubectl get nodes --show-labels   # Show all nodes and their labels.
kubectl get nodes -o wide         # Show additional information about the nodes.
kubectl top nodes                 # Get the current memory usage.
```

Managing pods:
```
kubectl get pods --show-labels                # Show all pods and their labels.
kubectl get pods -o wide                      # Inspect how pods get scheduled.
kubectl describe pod <pod-name>               # Inspect pod config (args, envs, ...).
kubectl get pod <pod-name> -o yaml            # Get pod yaml config.
kubectl exec -it <pod-name> -- /bin/bash      # Log in to a running pod.
kubectl logs <pod-name>                       # Get logs for a running pod.
kubectl logs memgraph-data-0-0 | tail -n 100  # Filter the last logs from a running pod.
kubectl logs --previous <pod-name>            # Get logs from a crashed pod.
kubectl logs <pod-name> -c <container-name>   # Get logs from a specific container, e.g., for debugging init containers.
kubectl cp <pod-name>:<pod-path> .            # Copy files from a running pod.
```

```
kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'  # List all events by creation time.
kubectl get events --namespace <namespace-name>                              # List all events in the given namespace.
```

```
kubectl port-forward <pod-name> <host-port>:<pod-port>  # Forward/connect a host port to the pod port.
```

```
kubectl cluster-info dump  # Dump current cluster state to stdout.
```

```
kubectl get statefulsets                   # Show all StatefulSets.
kubectl get pvc                            # Get all PersistentVolumeClaims.
kubectl get pvc -l app=<statefulset-name>  # Get the PersistentVolumeClaims for the StatefulSet.
```

Debugging Memgraph pods
You can attach GDB to a running Memgraph pod using ephemeral debug containers. This approach injects a debug container into an existing pod — no need to redeploy or create a separate privileged pod.
Requirements: kubectl 1.32+, Kubernetes 1.25+ (ephemeral containers must be enabled).
Attach GDB using the debug script
The debug-memgraph.sh script automates the entire workflow: it creates an
ephemeral container with root privileges and SYS_PTRACE, installs GDB, finds
the Memgraph process, and attaches to it.

```
./scripts/debug-memgraph.sh memgraph-data-0-0
```

The script auto-detects the target container name from the pod name (data →
memgraph-data, coordinator → memgraph-coordinator). You can override this
and other options:
```
# Specify container, namespace, or image explicitly
./scripts/debug-memgraph.sh memgraph-data-0-0 -c memgraph-data -n my-namespace

# Use a custom debug image
DEBUG_IMAGE=ubuntu:24.04 ./scripts/debug-memgraph.sh memgraph-data-0-0
```

Once attached, GDB will continue the process and stop on any crash or signal.
Use bt (backtrace) to inspect the call stack when it stops.
Manual approach (alternative)
If you can’t use ephemeral containers (older Kubernetes versions, or cluster
policies that block kubectl debug), you can deploy a privileged debug pod on
the same node and attach GDB from there.
First, identify which node your target pod is running on:
```
kubectl get pods -o wide
```

Edit perf_pod.yaml and set nodeName to match the target pod's node:
```
apiVersion: v1
kind: Pod
metadata:
  name: debug
spec:
  containers:
  - args:
    - "3600"
    command:
    - sleep
    image: ubuntu:22.04
    name: debug
    imagePullPolicy: Always
    securityContext:
      privileged: true
  hostPID: true
  nodeName: <node-where-your-pod-is-running> # must match target pod's node
  restartPolicy: Never
```

```
kubectl apply -f scripts/perf_pod.yaml
```

The hostPID: true setting gives the debug pod visibility into all processes on
the node. Since multiple Memgraph processes may be running on the same node, use
find-memgraph-pid.sh to find the correct PID by matching the pod UID against
/proc/<pid>/cgroup:
```
./scripts/find-memgraph-pid.sh memgraph-data-0-0
```

Then exec into the debug pod, install GDB, and attach to the Memgraph process:

```
kubectl exec -it debug -- bash
apt-get update && apt-get install -y gdb procps
gdb -p <PID>
```

Once GDB stops on a crash or signal, use the GDB commands above to investigate. Clean up when done:

```
kubectl delete pod debug
```

How the debug script works
Memgraph Helm charts run pods as non-root (uid 101, gid 103) with a restrictive
security context. The debug-memgraph.sh script works around this by:
- Using kubectl debug with --profile=sysadmin to grant SYS_PTRACE
- Applying a custom security profile that overrides runAsUser to 0 (root) for the ephemeral container only; the Memgraph container is unaffected
- Targeting the Memgraph container with --target to share its process namespace, making the Memgraph PID visible inside the debug container (roughly equivalent to the manual invocation sketched below)
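For reference, the manual invocation the script performs is roughly the following sketch (the image and container names are illustrative, and the custom runAsUser profile is applied via a separate profile file not shown here):

```
kubectl debug -it memgraph-data-0-0 \
  --image=ubuntu:24.04 \
  --target=memgraph-data \
  --profile=sysadmin \
  -- bash
```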
The official Kubernetes documentation on debugging running pods covers additional techniques, including node-level debugging.
Handling core dumps
When Memgraph crashes, for example, due to segmentation faults (SIGSEGV),
core dumps can provide invaluable insight for debugging. The Memgraph Helm
charts provide an easy way to enable persistent core dump storage using the
createCoreDumpsClaim option.
To enable core dumps, create a values.yaml file with at least the following setting:

```
createCoreDumpsClaim: true
```

If you're running the Memgraph high-availability chart, you can automatically upload core dumps to S3.
Setting this value to true will also enable the use of GDB inside Memgraph containers when using our provided charts.
This instructs the Helm chart to create a PersistentVolumeClaim (PVC) to store
core dumps generated by the Memgraph process.
Important configuration notes
By default, the storage size is 10GiB. Core dumps can be as large as your
node's total RAM, so it's recommended to set the size explicitly by adjusting
coreDumpsStorageSize in the values.yaml file.
Make sure to use the relwithdebinfo image of Memgraph by setting image.tag,
also in the values.yaml file.
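Putting the notes above together, a values.yaml sketch might look like this (the size and tag values are illustrative; key names follow the notes above):

```
createCoreDumpsClaim: true
coreDumpsStorageSize: 32Gi   # size this to your node's RAM
image:
  tag: 3.7-relwithdebinfo    # debug image with symbols
```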
Run the following command to install Memgraph with the debugging configuration:
```
helm install my-release memgraph/memgraph -f values.yaml
```

The core dumps are written to a mounted volume inside the container (the default
is /var/core/memgraph; you can tweak that by changing coreDumpsMountPath in
values.yaml). You can use kubectl exec or kubectl cp to access the files for
post-mortem analysis.
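For example, to list and fetch a core dump from a data pod (the pod name and file name are illustrative):

```
kubectl exec -it memgraph-data-0-0 -- ls /var/core/memgraph
kubectl cp memgraph-data-0-0:/var/core/memgraph/<core-file> ./<core-file>
```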
If your Kubernetes cluster runs on a major cloud provider and you want to store the dumps in S3, the best project to check out is the core-dump-handler.
Profiling Memgraph in Kubernetes
Profile a Memgraph process running inside a Kubernetes pod using perf and generate flame graphs.
Prerequisites
- kubectl configured with access to your cluster
- A running Memgraph deployment (standalone or HA)
Step 1: Identify the target pod
```
kubectl get pods -o wide
```

| NAME | READY | STATUS | RESTARTS | AGE | IP | NODE |
|---|---|---|---|---|---|---|
| memgraph-coordinator-1-0 | 1/1 | Running | 0 | 23h | 10.244.3.227 | aks-nodepool1-…000002 |
| memgraph-coordinator-2-0 | 1/1 | Running | 0 | 23h | 10.244.0.173 | aks-nodepool1-…000000 |
| memgraph-coordinator-3-0 | 1/1 | Running | 0 | 23h | 10.244.4.250 | aks-nodepool1-…000003 |
| memgraph-data-0-0 | 1/1 | Running | 1 (22h ago) | 23h | 10.244.2.152 | aks-nodepool1-…000001 |
| memgraph-data-1-0 | 1/1 | Running | 0 | 22m | 10.244.1.199 | aks-nodepool1-…000004 |
In this example, we want to profile memgraph-data-1-0, which is currently the MAIN instance. Note the NODE it is running on — the debug pod must be scheduled on the same node.
Step 2: Deploy the debug pod
Edit perf_pod.yaml and set nodeName to match the target pod’s node:
```
apiVersion: v1
kind: Pod
metadata:
  name: debug
spec:
  containers:
  - args:
    - "3600"
    command:
    - sleep
    image: ubuntu:22.04
    name: debug
    imagePullPolicy: Always
    securityContext:
      privileged: true
  hostPID: true
  nodeName: aks-nodepool1-38123842-vmss000004 # <-- must match target pod's node
  restartPolicy: Never
```

```
kubectl apply -f scripts/perf_pod.yaml
```

The debug pod needs privileged: true and hostPID: true so it can see host processes and access /proc/<pid>/cgroup to match processes to pods.
Step 3: Find the Memgraph PID
Since multiple Memgraph processes may be visible from the host PID namespace (due to Kubernetes multi-tenancy), we need to match the correct one to our target pod. The find-memgraph-pid.sh script does this automatically — it resolves the pod’s UID, lists all memgraph processes inside the debug pod, and matches via /proc/<pid>/cgroup:
```
./scripts/find-memgraph-pid.sh memgraph-data-1-0
```

Output:

```
Pod: memgraph-data-1-0
UID: c8707c88-631b-467c-af9f-26e9dac8e780
UID fragment: 26e9dac8e780
Debug pod: debug
Found memgraph PIDs: 1335771 1396816
cgroup match: /proc/1396816/cgroup:0::/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod...26e9dac8e780...
Matched memgraph PID: 1396816
```

Use -d to specify a different debug pod name, or -n for a non-default namespace:

```
./scripts/find-memgraph-pid.sh memgraph-data-1-0 -d my-debug-pod -n memgraph  # Default pod name is 'debug', default namespace is 'default'
```

Step 4: Install perf in the debug pod
```
kubectl exec -it debug -- bash
```

Inside the debug pod:

```
apt-get update && apt-get install -y linux-tools-common linux-tools-generic
```

Note (AKS / cloud kernels): apt-get install linux-tools-$(uname -r) will fail if the host kernel is a cloud-specific variant (e.g., 5.15.0-1102-azure) because the matching package isn't in the standard Ubuntu repos. Use linux-tools-generic instead; the generic perf binary works in most cases. If it complains about a version mismatch, invoke it directly: /usr/lib/linux-tools/*/perf record ...
Step 5: Record a perf profile
```
perf record -p <PID> --call-graph dwarf sleep 30
```

Replace <PID> with the PID from Step 3. Adjust the duration (sleep 30) as needed, and run your workload during this window.
Step 6: Generate a flame graph
```
apt-get install -y git
git clone https://github.com/brendangregg/FlameGraph
perf script | ./FlameGraph/stackcollapse-perf.pl > out.perf-folded
./FlameGraph/flamegraph.pl out.perf-folded > perf.svg
```

Step 7: Copy results and clean up
From your local machine:
```
kubectl cp debug:perf.svg perf.svg
kubectl cp debug:perf.data perf.data  # optional: raw perf data for later analysis
kubectl delete pod debug
```

Open perf.svg in a browser to explore the interactive flame graph.
Specific cloud provider instructions
The k8s quick reference is an amazing set of commands!