Kubernetes
Memgraph can be deployed on Kubernetes. The easiest way to do that is with Helm, the package manager for Kubernetes. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources.
Currently, the following charts are prepared and released: Memgraph standalone, Memgraph high availability, and Memgraph Lab.
The Helm charts are published on Artifact Hub. For details on the implementation of the Helm charts, check the Memgraph Helm charts repository.
Due to numerous possible use cases and deployment setups via Kubernetes, the provided Helm charts are a starting point you can modify according to your needs. This page will highlight some of the specific parts of the Helm charts that you might want to adjust.
Memgraph standalone Helm chart
Memgraph is a stateful application (database), so the Helm chart for standalone Memgraph is configured to deploy Memgraph as a Kubernetes StatefulSet workload. It deploys a single Memgraph instance in a single pod.
Typically, when deploying a stateful application like Memgraph, a StatefulSet workload is used to ensure that each pod has a unique identity and a stable network identity. When deploying Memgraph, it is also necessary to define PersistentVolumeClaims to store the data directory (/var/lib/memgraph). This enables the data to persist even if the pod is restarted or deleted.
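As a rough illustration of what such a deployment involves (this is a simplified sketch, not the chart's actual template; names and sizes are illustrative), a StatefulSet for Memgraph declares a volume claim template that mounts the data directory:

```yaml
# Simplified sketch of a StatefulSet with a volume claim template.
# The real chart templates differ; see the Memgraph Helm charts repository.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: memgraph
spec:
  serviceName: memgraph
  replicas: 1
  selector:
    matchLabels:
      app: memgraph
  template:
    metadata:
      labels:
        app: memgraph
    spec:
      containers:
        - name: memgraph
          image: memgraph/memgraph
          volumeMounts:
            - name: lib-storage
              mountPath: /var/lib/memgraph  # persisted data directory
  volumeClaimTemplates:
    - metadata:
        name: lib-storage
      spec:
        accessModes: ["ReadWriteOnce"]  # matches the chart's default access mode
        resources:
          requests:
            storage: 1Gi  # matches the chart's default PVC size
```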
Storage configuration
By default, the Helm chart will create a PersistentVolumeClaim (PVC) for storage and logs. If the storage class for the PVC is not defined, the PVC will use the default one available in the cluster. The storage class can be configured in the values.yaml file. To avoid losing your data, make sure the Retain reclaim policy is set on your storage class. If you delete a PersistentVolumeClaim without the Retain reclaim policy, you will lose your data because the PersistentVolume will be deleted too. The alternative to creating a new storage class is to patch your existing storage class by applying the Retain policy. This is necessary because the default Kubernetes policy is Delete. The patching can be done using the following bash script:
```bash
#!/bin/bash
# Get all PersistentVolume names
PVS=$(kubectl get pv --no-headers -o custom-columns=":metadata.name")

# Loop through each PV and patch its reclaim policy
for pv in $PVS; do
  echo "Patching PV: $pv"
  kubectl patch pv "$pv" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done
```
An example of a storage class for AWS EBS volumes:
```yaml
storageClass:
  name: "gp2"
  provisioner: "kubernetes.io/aws-ebs"
  storageType: "gp2"
  fsType: "ext4"
  reclaimPolicy: "Retain"
  volumeBindingMode: "Immediate"
```
The default template for a storage class is part of the Helm chart and can be found in the repository. If you don’t want to create a new storage class, set storageClass.create to false.
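For example, to reuse an existing storage class instead of creating a new one, a values.yaml override along these lines should work (storageClass.create comes from the text above; the storage.* key names are taken from the configuration table later in this document):

```yaml
# Skip creating a new storage class and point the PVCs at an existing one.
storageClass:
  create: false
storage:
  libStorageClassName: "gp2"  # existing class for the data directory
  logStorageClassName: "gp2"  # existing class for logs
```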
More details on the configuration options can be found in the configuration section.
Secrets
The Helm chart allows you to use Kubernetes secrets to store Memgraph
credentials. By default, the secrets are disabled. If you want to use secrets,
you can enable them in the values.yaml file.
The secrets are expected to provide values for the environment variables MEMGRAPH_USER and MEMGRAPH_PASSWORD.
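A minimal values.yaml fragment enabling secrets, using the key names from the configuration table later in this document; the referenced Kubernetes secret must be created separately (for example with kubectl create secret generic) and contain the USER and PASSWORD keys:

```yaml
secrets:
  enabled: true
  name: memgraph-secrets
  userKey: USER          # value is passed to the MEMGRAPH_USER env variable
  passwordKey: PASSWORD  # value is passed to the MEMGRAPH_PASSWORD env variable
```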
Probes
The Memgraph standalone chart uses startup, readiness and liveness probes. The startup probe determines when the container application has started, the liveness probe determines when a container should be restarted, and the readiness probe determines when a container is ready to accept traffic. The startup probe succeeds only after Memgraph’s recovery has finished; the liveness and readiness probes start only after the startup probe succeeds. By default, the startup probe has to succeed within 2 hours. If recovery from a backup takes longer than that, increase the configured value accordingly. The liveness and readiness probes have to succeed at least once in 5 minutes for a pod to be considered ready.
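For example, to extend the startup probe budget for a long recovery, the probe values can be raised in values.yaml. The sketch below uses the container.data.startupProbe keys from the configuration table later in this document; the standalone chart may expose these under a slightly different path, so check the chart's own values.yaml:

```yaml
container:
  data:
    startupProbe:
      # Total startup budget = failureThreshold × periodSeconds
      # (2880 × 10 s ≈ 8 hours in this illustrative example)
      failureThreshold: 2880
      periodSeconds: 10
```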
System configuration
The Helm chart sets the Linux kernel vm.max_map_count parameter to 262144 by default to ensure Memgraph won’t run into memory-mapping issues. The vm.max_map_count parameter is a kernel parameter that specifies the maximum number of memory map areas a process may have. This change is applied to all nodes in the cluster. If you want to disable this behavior, set sysctlInitContainer.enabled to false in the values.yaml file.
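Both the toggle and the value are exposed in values.yaml (key names as listed in the configuration table later in this document):

```yaml
sysctlInitContainer:
  enabled: true        # set to false to skip the sysctl init container
  maxMapCount: 262144  # value written to vm.max_map_count
```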
Installing Memgraph standalone Helm chart
To include a standalone Memgraph instance in your Kubernetes cluster, add the Memgraph Helm repository and install Memgraph.
The steps below will work in the Minikube environment, but you can also use them in other Kubernetes environments with minor adjustments.
Add the repository
Add the Memgraph Helm chart repository to your local Helm setup by running the following command:
```bash
helm repo add memgraph https://memgraph.github.io/helm-charts
```
Make sure to update the repository to fetch the latest Helm charts available:
```bash
helm repo update
```
Install Memgraph
To install Memgraph Helm chart, run the following command:
```bash
helm install <release-name> memgraph/memgraph
```
Replace <release-name> with the name of the release you chose.
When installing a chart, it’s best practice to specify the exact version you want to use. Using the latest tag can lead to issues, as a pod restart may pull a newer image, potentially causing unexpected changes or incompatibilities.
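For example, the image tag can be pinned in values.yaml (3.1.0 is the default tag from the configuration table later in this document; replace it with the version you want):

```yaml
image:
  repository: memgraph/memgraph
  tag: "3.1.0"           # pin an explicit version instead of relying on defaults
  pullPolicy: IfNotPresent
```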
Install Memgraph standalone chart with minikube
If you are installing the Memgraph standalone chart locally with minikube, we strongly recommend enabling the csi-hostpath-driver addon and using its storage class. Otherwise, you could have problems attaching PVCs to pods.
- Enable csi-hostpath-driver:

```bash
minikube addons disable storage-provisioner
minikube addons disable default-storageclass
minikube addons enable volumesnapshots
minikube addons enable csi-hostpath-driver
```

- Create a storage class with csi-hostpath-driver as a provisioner (file sc.yaml):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-delayed
provisioner: hostpath.csi.k8s.io
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

- Apply the storage class:

```bash
kubectl apply -f sc.yaml
```

- Set storageClassName to csi-hostpath-delayed in values.yaml.
Access Memgraph
Once Memgraph is installed, you can access it using the provided services and
endpoints, such as various client libraries, command-line
interface mgconsole or visual user interface Memgraph
Lab.
Monitoring
Memgraph’s standalone chart integrates with Kubernetes monitoring tools through:
- The kube-prometheus-stack Helm chart
- Memgraph’s Prometheus exporter

The kube-prometheus-stack chart should be installed independently from the Memgraph chart with the following command:
```bash
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -f kube_prometheus_stack_values.yaml \
  --namespace monitoring \
  --create-namespace
```
The kube_prometheus_stack_values.yaml file is optional. A template is available in the upstream chart’s repository.
If you install the kube-prometheus-stack in a non-default namespace, allow cross-namespace scraping. You can allow this by adding the following configuration to your kube_prometheus_stack_values.yaml file:
```yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
```
In order to use Memgraph’s Prometheus exporter and ServiceMonitor, make sure to update the values.yaml configuration file:
```yaml
prometheus:
  enabled: true
  namespace: monitoring
  memgraphExporter:
    port: 9115
    pullFrequencySeconds: 5
    repository: memgraph/mg-exporter
    tag: 0.2.1
  serviceMonitor:
    kubePrometheusStackReleaseName: kube-prometheus-stack
    interval: 15s
```
If you set prometheus.enabled to false, resources from charts/memgraph/templates/mg-exporter.yaml will still be installed into the monitoring namespace.
Refer to the configuration table later in the document for details on all parameters.
To uninstall kube-prometheus-stack, run:
```bash
helm uninstall kube-prometheus-stack --namespace monitoring
```
NOTE: The stack’s CRDs are not deleted automatically and must be removed manually:
```bash
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheusagents.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd scrapeconfigs.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
```
Configuration options
The following table lists the configurable parameters of the Memgraph HA chart and their default values.
| Parameter | Description | Default |
|---|---|---|
image.repository | Memgraph Docker image repository | memgraph/memgraph |
image.tag | Specific tag for the Memgraph Docker image. Overrides the image tag whose default is chart version. | 3.1.0 |
image.pullPolicy | Image pull policy | IfNotPresent |
env.MEMGRAPH_ENTERPRISE_LICENSE | Memgraph enterprise license | <your-license> |
env.MEMGRAPH_ORGANIZATION_NAME | Organization name | <your-organization-name> |
memgraphUserId | The user id that is hardcoded in Memgraph and Mage images | 101 |
memgraphGroupId | The group id that is hardcoded in Memgraph and Mage images | 103 |
storage.libPVCSize | Size of the storage PVC | 1Gi |
storage.libStorageClassName | The name of the storage class used for storing data. | "" |
storage.libStorageAccessMode | Access mode used for lib storage. | ReadWriteOnce |
storage.logPVCSize | Size of the log PVC | 1Gi |
storage.logStorageClassName | The name of the storage class used for storing logs. | "" |
storage.logStorageAccessMode | Access mode used for log storage. | ReadWriteOnce |
externalAccess.coordinator.serviceType | IngressNginx, NodePort, CommonLoadBalancer or LoadBalancer. By default, no external service will be created. | "" |
externalAccess.coordinator.annotations | Annotations for external services attached to coordinators. | {} |
externalAccess.dataInstance.serviceType | IngressNginx, NodePort or LoadBalancer. By default, no external service will be created. | "" |
externalAccess.dataInstance.annotations | Annotations for external services attached to data instances. | {} |
headlessService.enabled | Specifies whether headless services will be used inside K8s network on all instances. | false |
ports.boltPort | Bolt port used on coordinator and data instances. | 7687 |
ports.managementPort | Management port used on coordinator and data instances. | 10000 |
ports.replicationPort | Replication port used on data instances. | 20000 |
ports.coordinatorPort | Coordinator port used on coordinators. | 12000 |
affinity.unique | Schedule pods on different nodes in the cluster | false |
affinity.parity | Schedule pods on the same node with maximum one coordinator and one data node | false |
affinity.nodeSelection | Schedule pods on nodes with specific labels | false |
affinity.roleLabelKey | Label key for node selection | role |
affinity.dataNodeLabelValue | Label value for data nodes | data-node |
affinity.coordinatorNodeLabelValue | Label value for coordinator nodes | coordinator-node |
container.data.livenessProbe.tcpSocket.port | Port used for TCP connection. Should be the same as bolt port. | 7687 |
container.data.livenessProbe.failureThreshold | Failure threshold for liveness probe | 20 |
container.data.livenessProbe.timeoutSeconds | Timeout for liveness probe | 10 |
container.data.livenessProbe.periodSeconds | Period seconds for liveness probe | 5 |
container.data.readinessProbe.tcpSocket.port | Port used for TCP connection. Should be the same as bolt port. | 7687 |
container.data.readinessProbe.failureThreshold | Failure threshold for readiness probe | 20 |
container.data.readinessProbe.timeoutSeconds | Timeout for readiness probe | 10 |
container.data.readinessProbe.periodSeconds | Period seconds for readiness probe | 5 |
container.data.startupProbe.tcpSocket.port | Port used for TCP connection. Should be the same as bolt port. | 7687 |
container.data.startupProbe.failureThreshold | Failure threshold for startup probe | 1440 |
container.data.startupProbe.timeoutSeconds | Timeout for probe | 10 |
container.data.startupProbe.periodSeconds | Period seconds for startup probe | 10 |
container.data.terminationGracePeriodSeconds | Grace period for data pod termination | 1800 |
container.coordinators.livenessProbe.tcpSocket.port | Port used for TCP connection. Should be the same as bolt port. | 12000 |
container.coordinators.livenessProbe.failureThreshold | Failure threshold for liveness probe | 20 |
container.coordinators.livenessProbe.timeoutSeconds | Timeout for liveness probe | 10 |
container.coordinators.livenessProbe.periodSeconds | Period seconds for liveness probe | 5 |
container.coordinators.readinessProbe.tcpSocket.port | Port used for TCP connection. Should be the same as bolt port. | 12000 |
container.coordinators.readinessProbe.failureThreshold | Failure threshold for readiness probe | 20 |
container.coordinators.readinessProbe.timeoutSeconds | Timeout for readiness probe | 10 |
container.coordinators.readinessProbe.periodSeconds | Period seconds for readiness probe | 5 |
container.coordinators.startupProbe.tcpSocket.port | Port used for TCP connection. Should be the same as bolt port. | 12000 |
container.coordinators.startupProbe.failureThreshold | Failure threshold for startup probe | 1440 |
container.coordinators.startupProbe.timeoutSeconds | Timeout for probe | 10 |
container.coordinators.startupProbe.periodSeconds | Period seconds for startup probe | 10 |
container.coordinators.terminationGracePeriodSeconds | Grace period for coordinators pod termination | 1800 |
data | Configuration for data instances | See data section |
coordinators | Configuration for coordinator instances | See coordinators section |
sysctlInitContainer.enabled | Enable the init container to set sysctl parameters | true |
sysctlInitContainer.maxMapCount | Value for vm.max_map_count to be set by the init container | 262144 |
secrets.enabled | Enable the use of Kubernetes secrets for Memgraph credentials | false |
secrets.name | The name of the Kubernetes secret containing Memgraph credentials | memgraph-secrets |
secrets.userKey | The key in the Kubernetes secret for the Memgraph user, the value is passed to the MEMGRAPH_USER env. | USER |
secrets.passwordKey | The key in the Kubernetes secret for the Memgraph password, the value is passed to the MEMGRAPH_PASSWORD. | PASSWORD |
resources.coordinators | CPU/Memory resource requests/limits. Left empty by default. | {} |
resources.data | CPU/Memory resource requests/limits. Left empty by default. | {} |
prometheus.enabled | If set to true, K8s resources representing Memgraph’s Prometheus exporter will be deployed. | false |
prometheus.namespace | The namespace in which kube-prometheus-stack and Memgraph’s Prometheus exporter are installed. | monitoring |
prometheus.memgraphExporter.port | The port on which Memgraph’s Prometheus exporter is available. | 9115 |
prometheus.memgraphExporter.pullFrequencySeconds | How often will Memgraph’s Prometheus exporter pull data from Memgraph instances. | 5 |
prometheus.memgraphExporter.repository | The repository where Memgraph’s Prometheus exporter image is available. | memgraph/prometheus-exporter |
prometheus.memgraphExporter.tag | The tag of Memgraph’s Prometheus exporter image. | 0.2.1 |
prometheus.serviceMonitor.enabled | If enabled, a ServiceMonitor object will be deployed. | true |
prometheus.serviceMonitor.kubePrometheusStackReleaseName | The release name under which kube-prometheus-stack chart is installed. | kube-prometheus-stack |
prometheus.serviceMonitor.interval | How often will Prometheus pull data from Memgraph’s Prometheus exporter. | 15s |
labels.coordinators.podLabels | Enables you to set labels on a pod level. | {} |
labels.coordinators.statefulSetLabels | Enables you to set labels on a stateful set level. | {} |
labels.coordinators.serviceLabels | Enables you to set labels on a service level. | {} |
updateStrategy.type | Update strategy for StatefulSets. Possible values are RollingUpdate and OnDelete | RollingUpdate |
extraEnv.data | Env variables that users can define and are applied to data instances | [] |
extraEnv.coordinators | Env variables that users can define and are applied to coordinators | [] |
initContainers.data | Init containers that users can define that will be applied to data instances. | [] |
initContainers.coordinators | Init containers that users can define that will be applied to coordinators. | [] |
tolerations.coordinators | Applied to a coordinator pod and allows the pod to be scheduled on nodes with matching taints. | [] |
tolerations.data | Applied to a data pod and allows the pod to be scheduled on nodes with matching taints. | [] |
For the data and coordinators sections, each item in the list has the following parameters:
| Parameter | Description | Default |
|---|---|---|
id | ID of the instance | 0 for data, 1 for coordinators |
args | List of arguments for the instance | See args section |
The args section contains a list of arguments for the instance.
For all available database settings, refer to the configuration settings docs.
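As an illustrative sketch of the shape of these sections (the actual defaults live in the chart's values.yaml), a data item might look like this; --bolt-port and --log-level are standard Memgraph configuration flags:

```yaml
data:
  - id: "0"
    args:
      - "--bolt-port=7687"   # Bolt protocol port, matches ports.boltPort
      - "--log-level=INFO"   # Memgraph logging verbosity
```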
Memgraph high availability Helm chart
For the high availability setup, continue to the Memgraph high availability Helm chart documentation.
Memgraph Lab Helm chart
A Helm chart for deploying Memgraph Lab on Kubernetes.
Installing the Memgraph Lab Helm chart
To install the Memgraph Lab Helm chart, follow the steps below:
```bash
helm install <release-name> memgraph/memgraph-lab
```
Replace <release-name> with a name of your choice for the release.
Changing the default chart values
To change the default chart values, run the command with the specified set of flags:
```bash
helm install <resource-name> memgraph/memgraph-lab --set <flag1>=<value1>,<flag2>=<value2>,...
```
Or you can modify a values.yaml file and override the desired values:
```bash
helm install <resource-name> memgraph/memgraph-lab -f values.yaml
```
Configuration options
The following table lists the configurable parameters of the Memgraph Lab chart and their default values.
| Parameter | Description | Default |
|---|---|---|
image.repository | Memgraph Lab Docker image repository | memgraph/memgraph-lab |
image.tag | Specific tag for the Memgraph Lab Docker image. Overrides the image tag whose default is chart version. | "" (Defaults to chart’s app version) |
image.pullPolicy | Image pull policy | IfNotPresent |
replicaCount | Number of Memgraph Lab instances to run. | 1 |
service.type | Kubernetes service type | ClusterIP |
service.port | Kubernetes service port | 3000 |
service.targetPort | Kubernetes service target port | 3000 |
service.protocol | Protocol used by the service | TCP |
service.annotations | Annotations to add to the service | {} |
podAnnotations | Annotations to add to the pod | {} |
resources | CPU/Memory resource requests/limits. Left empty by default. | {} (See note on uncommenting) |
serviceAccount.create | Specifies whether a service account should be created | true |
serviceAccount.annotations | Annotations to add to the service account | {} |
serviceAccount.name | The name of the service account to use. If not set and create is true, a name is generated. | "" |
secrets.enabled | Enable the use of Kubernetes secrets. Will be injected as env variables. | false |
secrets.name | The name of the Kubernetes secret that will be used. | memgraph-secrets |
secrets.keys | Keys from the secrets.name that will be stored as env variables inside the pod. | [] |
Memgraph Lab can be further configured with environment variables in your
values.yaml file.
```yaml
env:
  - name: QUICK_CONNECT_MG_HOST
    value: memgraph
  - name: QUICK_CONNECT_MG_PORT
    value: "7687"
  - name: KEEP_ALIVE_TIMEOUT_MS
    value: "65000"
```
In case you added an Nginx Ingress service or a web server as a reverse proxy, update the following proxy timeout directives to avoid potential timeouts:
```
proxy_read_timeout X;
proxy_connect_timeout X;
proxy_send_timeout X;
```
where X is the number of seconds the connection (request query) can stay alive.
Additionally, set the Memgraph Lab KEEP_ALIVE_TIMEOUT_MS environment variable to a higher value to ensure that Memgraph Lab stays connected to Memgraph when running queries that take longer than 65 seconds.
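With the ingress-nginx controller, these directives are typically set through annotations on the Ingress resource rather than by editing nginx.conf directly. A sketch, where the 300-second value is illustrative:

```yaml
metadata:
  annotations:
    # Timeouts in seconds; raise them above your longest expected query.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
```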
Refer to the Memgraph Lab documentation for details on how to configure Memgraph Lab.