High availability reference queries
This guide provides a complete reference for all commands used to manage a Memgraph High Availability (HA) cluster.
Cluster registration commands
Important: All registration commands (adding coordinators and registering data instances) must be executed on the same coordinator. You may choose any coordinator for the initial setup; it automatically becomes the leader. After setup, the choice no longer matters.
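For illustration, here is a minimal sketch of a complete setup executed entirely on coordinator 1; all IPs, hostnames, ports, and instance names are placeholders modeled on the examples later in this guide (a cluster with three coordinators and two data instances is assumed).
ADD COORDINATOR 1 WITH CONFIG {
  "bolt_server": "my_outside_coordinator_1_IP:7687",
  "coordinator_server": "memgraph-coordinator-1.default.svc.cluster.local:12000",
  "management_server": "memgraph-coordinator-1.default.svc.cluster.local:10000"
};
ADD COORDINATOR 2 WITH CONFIG {
  "bolt_server": "my_outside_coordinator_2_IP:7687",
  "coordinator_server": "memgraph-coordinator-2.default.svc.cluster.local:12000",
  "management_server": "memgraph-coordinator-2.default.svc.cluster.local:10000"
};
ADD COORDINATOR 3 WITH CONFIG {
  "bolt_server": "my_outside_coordinator_3_IP:7687",
  "coordinator_server": "memgraph-coordinator-3.default.svc.cluster.local:12000",
  "management_server": "memgraph-coordinator-3.default.svc.cluster.local:10000"
};
REGISTER INSTANCE instance1 WITH CONFIG {
  "bolt_server": "my_outside_instance1_IP:7687",
  "management_server": "memgraph-data-1.default.svc.cluster.local:10000",
  "replication_server": "memgraph-data-1.default.svc.cluster.local:20000"
};
REGISTER INSTANCE instance2 WITH CONFIG {
  "bolt_server": "my_outside_instance2_IP:7687",
  "management_server": "memgraph-data-2.default.svc.cluster.local:10000",
  "replication_server": "memgraph-data-2.default.svc.cluster.local:20000"
};
SET INSTANCE instance1 TO MAIN;
The individual commands are documented in the sections that follow.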
ADD COORDINATOR
Adds a coordinator to the cluster.
ADD COORDINATOR coordinatorId WITH CONFIG {
  "bolt_server": boltServer,
  "coordinator_server": coordinatorServer,
  "management_server": managementServer
};
Parameters
- coordinatorId (int): Unique ID for each coordinator. Typically incremented sequentially.
- boltServer (string): External Bolt endpoint, "IP_OR_DNS:PORT". Usually port 7687. Must be reachable by external applications.
- coordinatorServer (string): Internal Raft endpoint, "HOSTNAME_OR_DNS:COORDINATOR_PORT".
- managementServer (string): Coordinator management endpoint, "HOSTNAME_OR_DNS:MANAGEMENT_PORT".
DNS/FQDN is recommended when IPs are ephemeral (e.g., Kubernetes).
Behavior & implications
- Can be run before or after registering data instances.
- Must be executed once for each coordinator.
- External Bolt server must be reachable; Raft and management endpoints may be internal.
- Writes cluster configuration to Raft log.
Example
ADD COORDINATOR 1 WITH CONFIG {
  "bolt_server": "my_outside_coordinator_1_IP:7687",
  "coordinator_server": "memgraph-coordinator-1.default.svc.cluster.local:12000",
  "management_server": "memgraph-coordinator-1.default.svc.cluster.local:10000"
};
REMOVE COORDINATOR
Removes a follower coordinator from the cluster.
REMOVE COORDINATOR coordinatorId;
Parameters
- coordinatorId (int): ID of a previously registered coordinator.
Behavior & implications
- Must be executed on the leader coordinator.
- Leader coordinator cannot remove itself. To remove the leader, first trigger a leadership change.
Example
REMOVE COORDINATOR 2;
REGISTER INSTANCE
Registers a data instance in the cluster.
REGISTER INSTANCE instanceName (AS ASYNC | AS STRICT_SYNC)? WITH CONFIG {
  "bolt_server": boltServer,
  "management_server": managementServer,
  "replication_server": replicationServer
};
Parameters
- instanceName: Unique symbolic name of the data instance.
- AS ASYNC / AS STRICT_SYNC (optional): Selects the replication mode. Default: SYNC.
- boltServer: External Bolt endpoint, "IP_OR_DNS:PORT".
- managementServer: Endpoint the coordinator uses to health-check the data instance.
- replicationServer: Replication endpoint, typically using port 20000.
Behavior
- Coordinator connects via management_server to verify liveness.
- Coordinator begins periodic health checks.
- Instance is automatically demoted to REPLICA.
- Replication server is started on the data instance.
- Operation is persisted in Raft.
Replication mode rules
- Allowed combinations: STRICT_SYNC + ASYNC, SYNC + ASYNC.
- Forbidden combination: STRICT_SYNC + SYNC.
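For example, a sketch of registering an additional data instance as an asynchronous replica could look like the following; the instance name and endpoints are placeholders modeled on the example below.
REGISTER INSTANCE instance2 AS ASYNC WITH CONFIG {
  "bolt_server": "my_outside_instance2_IP:7687",
  "management_server": "memgraph-data-2.default.svc.cluster.local:10000",
  "replication_server": "memgraph-data-2.default.svc.cluster.local:20000"
};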
Notes
- In Kubernetes, use service DNS names (e.g. memgraph-data-1.default.svc.cluster.local).
- Local development uses localhost.
Example
REGISTER INSTANCE instance1 WITH CONFIG {
  "bolt_server": "my_outside_instance1_IP:7687",
  "management_server": "memgraph-data-1.default.svc.cluster.local:10000",
  "replication_server": "memgraph-data-1.default.svc.cluster.local:20000"
};
UNREGISTER INSTANCE
Removes a data instance from the cluster.
UNREGISTER INSTANCE instanceName;
Parameters
- instanceName: Name of the data instance.
Implications
- Do not unregister the MAIN instance; this may corrupt cluster state.
- A healthy MAIN must exist during the operation.
- The instance is also removed from MAIN’s replica set.
Example
UNREGISTER INSTANCE instance_1;
Replication role management
SET INSTANCE ... TO MAIN
Promotes a replica to MAIN.
SET INSTANCE instanceName TO MAIN;
Behavior
- All other registered instances become replicas of the new MAIN.
- Written to Raft log.
Implications
- Fails if a MAIN already exists.
- Fails if any instance is unavailable.
Example
SET INSTANCE instance_0 TO MAIN;
DEMOTE INSTANCE
Demotes the current MAIN to a REPLICA.
DEMOTE INSTANCE instanceName;
Behavior
- MAIN becomes REPLICA.
- Written to Raft log.
Implications
- Failover is not automatic after demotion. You must manually promote another instance using SET INSTANCE ... TO MAIN.
Tip: Combine DEMOTE INSTANCE + SET INSTANCE ... TO MAIN for manual failover, which is useful during maintenance.
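For example, a manual failover that moves the MAIN role from instance1 to instance2 (instance names are placeholders) could look like this:
DEMOTE INSTANCE instance1;
SET INSTANCE instance2 TO MAIN;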
Example
DEMOTE INSTANCE instance1;
Monitoring commands
SHOW INSTANCES
Displays the state of all servers in the cluster.
SHOW INSTANCES;
Output includes
- Network endpoints (bolt, coordinator, management)
- Health state
- Role: MAIN, REPLICA, LEADER, FOLLOWER, or UNKNOWN
- Time since last health ping
Behavior on followers
- Follower attempts to query the leader for accurate state.
- If the leader is unavailable, the follower reports all servers as "down".
SHOW INSTANCE
Displays information about the coordinator you’re connected to.
SHOW INSTANCE;
Output includes
- Instance name
- External Bolt server
- Coordinator (Raft) endpoint
- Management server endpoint
- Cluster role (LEADER/FOLLOWER)
If ADD COORDINATOR has not been run, bolt_server will be empty.
SHOW REPLICATION LAG
Shows replication lag (in committed transactions) for all instances. Must be run on the leader.
SHOW REPLICATION LAG;
Implications
- Lag values survive restarts (stored in snapshots + WAL).
- Useful during manual failover to evaluate risk of data loss.
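For instance, before a manual failover you could check the lag on the leader coordinator and only then promote a suitable replica; the instance names below are placeholders.
SHOW REPLICATION LAG;
// If the chosen replica's lag is acceptable, perform the manual failover:
DEMOTE INSTANCE instance1;
SET INSTANCE instance2 TO MAIN;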
Troubleshooting commands
FORCE RESET CLUSTER STATE
Resets cluster state when the cluster cannot reach a healthy configuration.
FORCE RESET CLUSTER STATE;
Behavior
- All alive instances are demoted to REPLICA.
- A new MAIN is selected from alive instances.
- Down instances are demoted after they come back online.
- Writes changes to Raft.
Implications
- Must be executed on the leader coordinator.