High availability reference queries
This guide provides a complete reference for all commands used to manage a Memgraph High Availability (HA) cluster.
Cluster registration commands
Important: You may choose any coordinator for the initial setup; it automatically becomes the leader. After setup, the choice no longer matters.
All queries can be run on any coordinator. If the coordinator you are connected to is not the leader, the query is automatically forwarded to the current leader and executed there, because the Raft protocol specifies that only the leader may accept changes to the cluster.
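Since these commands are plain Cypher strings sent over Bolt, a full cluster bootstrap can be scripted. A minimal sketch in Python; the hostnames, ports, and instance names are illustrative assumptions, not values prescribed by Memgraph:

```python
# Sketch of a typical HA setup sequence, run against any coordinator.
# All hostnames, IPs, and instance names below are made up for illustration.

def add_coordinator(cid: int, bolt: str, coord: str, mgmt: str) -> str:
    """Build an ADD COORDINATOR query string."""
    return (
        f'ADD COORDINATOR {cid} WITH CONFIG {{'
        f'"bolt_server": "{bolt}", '
        f'"coordinator_server": "{coord}", '
        f'"management_server": "{mgmt}"}};'
    )

def register_instance(name: str, bolt: str, mgmt: str, repl: str) -> str:
    """Build a REGISTER INSTANCE query string (default SYNC mode)."""
    return (
        f'REGISTER INSTANCE {name} WITH CONFIG {{'
        f'"bolt_server": "{bolt}", '
        f'"management_server": "{mgmt}", '
        f'"replication_server": "{repl}"}};'
    )

# Ordered setup: coordinators first, then data instances, then promote one MAIN.
queries = [
    add_coordinator(i, f"10.0.0.{i}:7687",
                    f"coord-{i}.example.local:12000",
                    f"coord-{i}.example.local:10000")
    for i in (1, 2, 3)
] + [
    register_instance(f"instance_{i}", f"10.0.1.{i}:7687",
                      f"data-{i}.example.local:10000",
                      f"data-{i}.example.local:20000")
    for i in (1, 2, 3)
] + ["SET INSTANCE instance_1 TO MAIN;"]
```

Each string in `queries` would then be executed in order over a Bolt connection to any coordinator.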
ADD COORDINATOR
Adds a coordinator to the cluster.
```cypher
ADD COORDINATOR coordinatorId WITH CONFIG {
  "bolt_server": boltServer,
  "coordinator_server": coordinatorServer,
  "management_server": managementServer
};
```
Parameters
- `coordinatorId` (int) — unique ID for each coordinator, typically incremented sequentially.
- `boltServer` (string) — external Bolt endpoint, `"IP_OR_DNS:PORT"`. Usually port 7687; must be reachable by external applications.
- `coordinatorServer` (string) — internal Raft endpoint, `"HOSTNAME_OR_DNS:COORDINATOR_PORT"`.
- `managementServer` (string) — coordinator management endpoint, `"HOSTNAME_OR_DNS:MANAGEMENT_PORT"`.
DNS/FQDN is recommended when IPs are ephemeral (e.g., Kubernetes).
Behavior & implications
- Can be run before or after registering data instances.
- Must be executed once for each coordinator.
- External Bolt server must be reachable; Raft and management endpoints may be internal.
- Writes cluster configuration to Raft log.
Example
```cypher
ADD COORDINATOR 1 WITH CONFIG {
  "bolt_server": "my_outside_coordinator_1_IP:7687",
  "coordinator_server": "memgraph-coordinator-1.default.svc.cluster.local:12000",
  "management_server": "memgraph-coordinator-1.default.svc.cluster.local:10000"
};
```
REMOVE COORDINATOR
Removes a follower coordinator from the cluster.
```cypher
REMOVE COORDINATOR coordinatorId;
```
Parameters
- `coordinatorId` (int) — previously registered coordinator ID.
Behavior & implications
- Leader coordinator cannot remove itself. To remove the leader, first trigger a leadership change.
Example
```cypher
REMOVE COORDINATOR 2;
```
UPDATE CONFIG
Updates the configuration of a data instance or coordinator in the cluster. Use this command, for example, to update the DNS names of your Bolt servers when migrating to new load balancers.
```cypher
UPDATE CONFIG FOR ( INSTANCE instanceName | COORDINATOR coordinatorServerId ) configsMap;
```
Parameters
- `instanceName` (string) — name of the data instance to which the configuration changes apply.
- `coordinatorServerId` (int) — previously registered coordinator ID to which the configuration changes apply.
- `configsMap` (map) — map of key-value pairs specifying which fields to update. Currently, only the Bolt server of each instance can be updated.
Behavior & implications
- Only the Bolt server can currently be updated.
Example
```cypher
UPDATE CONFIG FOR COORDINATOR 1 {'bolt_server': '127.0.0.1:7690'};
UPDATE CONFIG FOR INSTANCE instance_2 {'bolt_server': '127.0.0.1:7688'};
```
REGISTER INSTANCE
Registers a data instance in the cluster.
```cypher
REGISTER INSTANCE instanceName ( AS ASYNC | AS STRICT_SYNC ) ? WITH CONFIG {
  "bolt_server": boltServer,
  "management_server": managementServer,
  "replication_server": replicationServer
};
```
Parameters
- `instanceName` — unique symbolic name of the data instance.
- `AS ASYNC` / `AS STRICT_SYNC` (optional) — selects the replication mode. Default: `SYNC`.
- `boltServer` — external Bolt endpoint, `"IP_OR_DNS:PORT"`.
- `managementServer` — endpoint the coordinator uses for data instance health checks.
- `replicationServer` — replication endpoint, typically using port 20000.
Behavior
- Coordinator connects via `management_server` to verify liveness.
- Coordinator begins periodic health checks.
- Instance is automatically demoted to REPLICA.
- Replication server is started on the data instance.
- Operation is persisted in Raft.
Replication mode rules
- Allowed combinations: `STRICT_SYNC` + `ASYNC`, `SYNC` + `ASYNC`
- Forbidden combination: `STRICT_SYNC` + `SYNC`
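The rules above can be expressed as a small validity check. A sketch, where `modes_compatible` is a hypothetical helper (not part of Memgraph) that tests the set of replication modes in use against the allowed combinations:

```python
# Sketch of the replication-mode compatibility rule: STRICT_SYNC and SYNC
# replicas cannot coexist in one cluster, while ASYNC combines with either.
ALLOWED_PAIRS = {
    frozenset({"STRICT_SYNC", "ASYNC"}),
    frozenset({"SYNC", "ASYNC"}),
}

def modes_compatible(modes: set[str]) -> bool:
    """Return True if the set of replication modes in use is allowed."""
    if len(modes) <= 1:
        return True  # a single mode on its own is always fine
    return frozenset(modes) in ALLOWED_PAIRS
```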
Notes
- In Kubernetes, use service DNS names (e.g. `memgraph-data-1.default.svc.cluster.local`).
- Local development uses `localhost`.
Example
```cypher
REGISTER INSTANCE instance1 WITH CONFIG {
  "bolt_server": "my_outside_instance1_IP:7687",
  "management_server": "memgraph-data-1.default.svc.cluster.local:10000",
  "replication_server": "memgraph-data-1.default.svc.cluster.local:20000"
};
```
UNREGISTER INSTANCE
Removes a data instance from the cluster.
```cypher
UNREGISTER INSTANCE instanceName;
```
Parameters
- `instanceName` — name of the data instance.
Implications
- Do not unregister the MAIN instance; this may corrupt cluster state.
- A healthy MAIN must exist during the operation.
- The instance is also removed from MAIN’s replica set.
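Client-side tooling can enforce the first rule before issuing the command. An illustrative guard; the `(name, role)` row shape is a simplification assumed for the sketch, not Memgraph's actual `SHOW INSTANCES` output:

```python
# Illustrative guard that refuses to unregister the current MAIN.
# The (name, role) tuples are an assumed, simplified view of cluster state.

def safe_to_unregister(instance: str, cluster: list[tuple[str, str]]) -> bool:
    """Allow unregistering only if the instance exists, is not MAIN,
    and a healthy MAIN is present elsewhere in the cluster."""
    roles = dict(cluster)
    if roles.get(instance) is None or roles[instance] == "MAIN":
        return False
    return "MAIN" in roles.values()
```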
Example
```cypher
UNREGISTER INSTANCE instance_1;
```
Replication role management
SET INSTANCE ... TO MAIN
Promotes a replica to MAIN.
```cypher
SET INSTANCE instanceName TO MAIN;
```
Behavior
- All other registered instances become replicas of the new MAIN.
- Written to Raft log.
Implications
- Fails if a MAIN already exists.
- Fails if any instance is unavailable.
Example
```cypher
SET INSTANCE instance_0 TO MAIN;
```
DEMOTE INSTANCE
Demotes the current MAIN to a REPLICA.
```cypher
DEMOTE INSTANCE instanceName;
```
Behavior
- MAIN becomes REPLICA.
- Written to Raft log.
Implications
- Failover is not automatic after demotion. You must manually promote another instance using `SET INSTANCE ... TO MAIN`.

Tip: Combine `DEMOTE INSTANCE` + `SET INSTANCE ... TO MAIN` for manual failover, useful during maintenance.
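This manual failover flow can be scripted as an ordered pair of queries. A sketch with a hypothetical helper and illustrative instance names; the strings would be executed in order against any coordinator:

```python
# Manual failover: demote the current MAIN, then promote the chosen replica.
# Instance names are placeholders for illustration.

def manual_failover(old_main: str, new_main: str) -> list[str]:
    """Return the two queries for a manual failover, in execution order."""
    return [
        f"DEMOTE INSTANCE {old_main};",
        f"SET INSTANCE {new_main} TO MAIN;",
    ]
```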
Example
```cypher
DEMOTE INSTANCE instance1;
```
Monitoring commands
SHOW INSTANCES
Displays the state of all servers in the cluster.
```cypher
SHOW INSTANCES;
```
Output includes
- Network endpoints (bolt, coordinator, management)
- Health state
- Role: MAIN, REPLICA, LEADER, FOLLOWER, or UNKNOWN
- Time since last health ping
Behavior on followers
- Follower attempts to query the leader for accurate state.
- If the leader is unavailable, the follower reports all servers as "down".
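When scripting health checks on top of this output, the follower caveat matters: an all-down report may only mean the leader was unreachable. An illustrative summarizer, assuming a simplified `(name, role, health)` row shape rather than the real column set:

```python
# Illustrative helper that summarizes SHOW INSTANCES output.
# The (name, role, health) row shape is an assumption for this sketch;
# the actual result set contains more columns.

def unhealthy(rows: list[tuple[str, str, str]]) -> list[str]:
    """Names of servers whose health is not 'up'.

    Caution: on a follower that cannot reach the leader, every server is
    reported as 'down', so an all-down result may mean 'leader unreachable'
    rather than a full outage."""
    return [name for name, _role, health in rows if health != "up"]
```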
SHOW INSTANCE
Displays information about the coordinator you’re connected to.
```cypher
SHOW INSTANCE;
```
Output includes
- Instance name
- External Bolt server
- Coordinator (Raft) endpoint
- Management server endpoint
- Cluster role (LEADER/FOLLOWER)
If `ADD COORDINATOR` has not been run, `bolt_server` will be empty.
SHOW REPLICATION LAG
Shows replication lag (in committed transactions) for all instances.
```cypher
SHOW REPLICATION LAG;
```
Implications
- Lag values survive restarts (stored in snapshots + WAL).
- Useful during manual failover to evaluate risk of data loss.
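For example, a failover script might prefer the replica with the smallest lag to minimize data loss. A sketch, assuming lag is reported as a simple `(instance, lag)` pair rather than the actual result columns:

```python
# Pick a failover target by replication lag: the replica missing the
# fewest committed transactions loses the least data when promoted.
# The (instance, lag) pairs are an assumed simplification of
# SHOW REPLICATION LAG output.

def best_failover_target(lags: list[tuple[str, int]]) -> str:
    """Return the replica name with the smallest replication lag."""
    return min(lags, key=lambda pair: pair[1])[0]
```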
Troubleshooting commands
FORCE RESET CLUSTER STATE
Resets cluster state when the cluster cannot reach a healthy configuration.
```cypher
FORCE RESET CLUSTER STATE;
```
Behavior
- All alive instances are demoted to REPLICA.
- A new MAIN is selected from alive instances.
- Down instances are demoted after they come back online.
- Writes changes to Raft.