
Set up an HA cluster with Docker Compose (Enterprise)

The following example shows you how to set up a Memgraph high availability cluster using Docker Compose. The cluster uses a user-defined bridge network.

💡

In production, it is advised to run each instance on its own dedicated server; the setup, however, is very similar. Make sure the machines' IP addresses are reachable from within the cluster so the instances can be registered properly.

Start the cluster with Docker Compose

You can use the following docker-compose.yml directly to start the cluster with docker compose up. The only values you need to modify are your organization name and your enterprise license key.

services:
  coord1:
    image: "memgraph/memgraph"
    container_name: coord1
    volumes:
      - mg_lib1:/var/lib/memgraph
      - mg_log1:/var/log/memgraph
    command: [
      "--log-level=TRACE",
      "--also-log-to-stderr=true",
      "--bolt-port=7691",
      "--coordinator-id=1",
      "--coordinator-port=10111",
      "--management-port=12121",
      "--coordinator-hostname=coord1",
      "--nuraft-log-file=/var/log/memgraph/nuraft"
    ]
    networks:
      memgraph_ha:
        ipv4_address: 172.21.0.4
    ports:
      - "7691:7691"
    environment:
      - MEMGRAPH_ORGANIZATION_NAME=<YOUR_ORGANIZATION_NAME>
      - MEMGRAPH_ENTERPRISE_LICENSE=<YOUR_ENTERPRISE_LICENSE>
    depends_on:
      - instance1
      - instance2
      - instance3

  coord2:
    image: "memgraph/memgraph"
    container_name: coord2
    volumes:
      - mg_lib2:/var/lib/memgraph
      - mg_log2:/var/log/memgraph
    command: [
      "--log-level=TRACE",
      "--also-log-to-stderr=true",
      "--bolt-port=7692",
      "--coordinator-id=2",
      "--coordinator-port=10112",
      "--management-port=12122",
      "--coordinator-hostname=coord2",
      "--nuraft-log-file=/var/log/memgraph/nuraft"
    ]
    networks:
      memgraph_ha:
        ipv4_address: 172.21.0.2
    ports:
      - "7692:7692"
    environment:
      - MEMGRAPH_ORGANIZATION_NAME=<YOUR_ORGANIZATION_NAME>
      - MEMGRAPH_ENTERPRISE_LICENSE=<YOUR_ENTERPRISE_LICENSE>
    depends_on:
      - instance1
      - instance2
      - instance3

  coord3:
    image: "memgraph/memgraph"
    container_name: coord3
    volumes:
      - mg_lib3:/var/lib/memgraph
      - mg_log3:/var/log/memgraph
    command: [
      "--log-level=TRACE",
      "--also-log-to-stderr=true",
      "--bolt-port=7693",
      "--coordinator-id=3",
      "--coordinator-port=10113",
      "--management-port=12123",
      "--coordinator-hostname=coord3",
      "--nuraft-log-file=/var/log/memgraph/nuraft"
    ]
    networks:
      memgraph_ha:
        ipv4_address: 172.21.0.3
    ports:
      - "7693:7693"
    environment:
      - MEMGRAPH_ORGANIZATION_NAME=<YOUR_ORGANIZATION_NAME>
      - MEMGRAPH_ENTERPRISE_LICENSE=<YOUR_ENTERPRISE_LICENSE>
    depends_on:
      - instance1
      - instance2
      - instance3

  instance1:
    image: "memgraph/memgraph"
    container_name: instance1
    volumes:
      - mg_lib4:/var/lib/memgraph
      - mg_log4:/var/log/memgraph
    command: [
      "--log-level=TRACE",
      "--also-log-to-stderr=true",
      "--bolt-port=7687",
      "--management-port=13011"
    ]
    networks:
      memgraph_ha:
        ipv4_address: 172.21.0.6
    ports:
      - "7687:7687"
    environment:
      - MEMGRAPH_ORGANIZATION_NAME=<YOUR_ORGANIZATION_NAME>
      - MEMGRAPH_ENTERPRISE_LICENSE=<YOUR_ENTERPRISE_LICENSE>

  instance2:
    image: "memgraph/memgraph"
    container_name: instance2
    volumes:
      - mg_lib5:/var/lib/memgraph
      - mg_log5:/var/log/memgraph
    command: [
      "--log-level=TRACE",
      "--also-log-to-stderr=true",
      "--bolt-port=7688",
      "--management-port=13012"
    ]
    networks:
      memgraph_ha:
        ipv4_address: 172.21.0.7
    ports:
      - "7688:7688"
    environment:
      - MEMGRAPH_ORGANIZATION_NAME=<YOUR_ORGANIZATION_NAME>
      - MEMGRAPH_ENTERPRISE_LICENSE=<YOUR_ENTERPRISE_LICENSE>

  instance3:
    image: "memgraph/memgraph"
    container_name: instance3
    volumes:
      - mg_lib6:/var/lib/memgraph
      - mg_log6:/var/log/memgraph
    command: [
      "--log-level=TRACE",
      "--also-log-to-stderr=true",
      "--bolt-port=7689",
      "--management-port=13013"
    ]
    networks:
      memgraph_ha:
        ipv4_address: 172.21.0.8
    ports:
      - "7689:7689"
    environment:
      - MEMGRAPH_ORGANIZATION_NAME=<YOUR_ORGANIZATION_NAME>
      - MEMGRAPH_ENTERPRISE_LICENSE=<YOUR_ENTERPRISE_LICENSE>

volumes:
  mg_lib1:
  mg_lib2:
  mg_lib3:
  mg_lib4:
  mg_lib5:
  mg_lib6:
  mg_log1:
  mg_log2:
  mg_log3:
  mg_log4:
  mg_log5:
  mg_log6:

networks:
  memgraph_ha:
    name: memgraph_ha
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: "172.21.0.0/16"
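
Assuming the file above is saved as docker-compose.yml, the cluster can be started and torn down with the standard Compose commands:

```shell
# Start all six containers in the background
docker compose up -d

# Follow the logs of one coordinator while the cluster forms
docker logs -f coord1

# Tear the cluster down (add -v to also remove the named volumes)
docker compose down
```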

Register the coordinator instances

Next, we register all three coordinators in the cluster.
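
The registration queries below must be run on coordinator 1. One way to open a client session against its Bolt port (7691) is mgconsole, for example via the memgraph/mgconsole image; any Bolt-speaking client works just as well:

```shell
# Open a Bolt session against coordinator 1 on the compose network
docker run -it --network memgraph_ha memgraph/mgconsole --host coord1 --port 7691
```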

The first query is the self-registration of coordinator 1, which also needs to be executed:

ADD COORDINATOR 1 WITH CONFIG {
  "bolt_server": "localhost:7691",
  "coordinator_server": "coord1:10111",
  "management_server": "coord1:12121"
};

Next, we register the other two coordinators:

ADD COORDINATOR 2 WITH CONFIG {
  "bolt_server": "localhost:7692",
  "coordinator_server": "coord2:10112",
  "management_server": "coord2:12122"
};
ADD COORDINATOR 3 WITH CONFIG {
  "bolt_server": "localhost:7693", 
  "coordinator_server": "coord3:10113", 
  "management_server": "coord3:12123"
};

After running SHOW INSTANCES; on coordinator 1, we can observe that there is indeed one leader and two followers in the cluster.

Register the data instances

We continue by registering the three data instances:

REGISTER INSTANCE instance_1 WITH CONFIG {
  "bolt_server": "localhost:7687", 
  "management_server": "instance1:13011", 
  "replication_server": "instance1:10001"
};
REGISTER INSTANCE instance_2 WITH CONFIG {
  "bolt_server": "localhost:7688",
  "management_server": "instance2:13012",
  "replication_server": "instance2:10002"
};
REGISTER INSTANCE instance_3 WITH CONFIG {
  "bolt_server": "localhost:7689",
  "management_server": "instance3:13013",
  "replication_server": "instance3:10003"
};
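
The registration queries above follow a regular pattern derived from the port scheme in the compose file (bolt 769&lt;i&gt;, coordinator 1011&lt;i&gt;, and management 1212&lt;i&gt; for coordinators; bolt 7687–7689, management 1301&lt;i&gt;, and replication 1000&lt;i&gt; for data instances). As a purely illustrative sketch (this helper is hypothetical, not part of Memgraph), the queries can be generated in Python:

```python
# Hypothetical helper: generate the registration queries above from the
# port scheme used in docker-compose.yml. Purely illustrative.

def add_coordinator(i: int) -> str:
    return (
        f'ADD COORDINATOR {i} WITH CONFIG {{'
        f'"bolt_server": "localhost:769{i}", '
        f'"coordinator_server": "coord{i}:1011{i}", '
        f'"management_server": "coord{i}:1212{i}"}};'
    )

def register_instance(i: int) -> str:
    return (
        f'REGISTER INSTANCE instance_{i} WITH CONFIG {{'
        f'"bolt_server": "localhost:{7686 + i}", '
        f'"management_server": "instance{i}:1301{i}", '
        f'"replication_server": "instance{i}:1000{i}"}};'
    )

for i in (1, 2, 3):
    print(add_coordinator(i))
for i in (1, 2, 3):
    print(register_instance(i))
```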

Set one of the instances as MAIN

We promote one of the instances to MAIN. The rest will serve as REPLICAs.

SET INSTANCE instance_3 TO MAIN;

Check cluster state

By issuing the command again:

SHOW INSTANCES;

We can observe that there is one MAIN instance, two REPLICA instances, one leader coordinator, and two follower coordinators in the cluster:

| name          | bolt_server    | coordinator_server | management_server | health | role     | last_succ_resp_ms |
| ------------- | -------------- | ------------------ | ----------------- | ------ | -------- | ----------------- |
| coordinator_1 | localhost:7691 | localhost:10111    | localhost:12121   | up     | leader   | 0                 |
| coordinator_2 | localhost:7692 | localhost:10112    | localhost:12122   | up     | follower | 16                |
| coordinator_3 | localhost:7693 | localhost:10113    | localhost:12123   | up     | follower | 25                |
| instance_1    | localhost:7687 | ""                 | localhost:13011   | up     | replica  | 39                |
| instance_2    | localhost:7688 | ""                 | localhost:13012   | up     | replica  | 21                |
| instance_3    | localhost:7689 | ""                 | localhost:13013   | up     | main     | 91                |
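
When scripting against SHOW INSTANCES, a quick sanity check over the returned rows can confirm the expected topology. A minimal sketch in Python, assuming rows are (name, health, role) tuples shaped like the table above (this checker is illustrative, not part of Memgraph):

```python
# Sanity-check a SHOW INSTANCES result: expect exactly one coordinator
# leader, exactly one MAIN data instance, and every member healthy.
# Rows are assumed to be (name, health, role) tuples, as in the table above.

def check_cluster(rows):
    assert all(health == "up" for _, health, _ in rows), "unhealthy member"
    roles = [role for _, _, role in rows]
    assert roles.count("leader") == 1, "expected exactly one leader"
    assert roles.count("main") == 1, "expected exactly one MAIN"
    return True

rows = [
    ("coordinator_1", "up", "leader"),
    ("coordinator_2", "up", "follower"),
    ("coordinator_3", "up", "follower"),
    ("instance_1", "up", "replica"),
    ("instance_2", "up", "replica"),
    ("instance_3", "up", "main"),
]
check_cluster(rows)
```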
💡

For localhost development: since the host cannot resolve the container hostnames of the coordinators and data instances, the bolt_server values in this Docker Compose setup must be set to localhost:&lt;port&gt; rather than 127.0.0.1.

This is because in some Docker setups localhost is intercepted and mapped to the host network automatically, while 127.0.0.1 always stays within the container.