|
| 1 | ++++ |
| 2 | +date = "2017-03-20T22:25:17+11:00" |
| 3 | +title = "HA Cluster Setup" |
| 4 | +weight = 5 |
| 5 | +type = "docs" |
| 6 | +[menu.main] |
| 7 | + parent = "installation" |
| 8 | ++++ |
| 9 | + |
| 10 | +You can run three Dgraph Alpha servers and three Dgraph Zero servers in a highly available cluster setup. For a highly available setup, start the Dgraph Zero server with the `--replicas 3` flag, so that all data is replicated on three Alpha servers and forms one Alpha group. You can install a highly available cluster using:
| 11 | +* the [dgraph-ha.yaml](https://github.com/dgraph-io/dgraph/blob/main/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml) file
| 12 | +* Helm charts
| 13 | + |
| 14 | +### Install a highly available Dgraph cluster using YAML or Helm |
| 15 | + |
| 16 | +{{% tabs %}} {{< tab "YAML" >}} |
| 17 | +#### Before you begin
| 18 | + |
| 19 | +* Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). |
| 20 | +* Ensure that you have a production-ready Kubernetes cluster with at least three worker nodes running in a cloud provider of your choice. |
| 21 | +* (Optional) To run Dgraph Alpha with TLS, see [TLS Configuration]({{< relref "tls-configuration.md" >}}). |
| 22 | + |
| 23 | +#### Installing a highly available Dgraph cluster |
| 24 | + |
| 25 | +1. Verify that you are able to access the nodes in the Kubernetes cluster: |
| 26 | + |
| 27 | + ```bash |
| 28 | + kubectl get nodes |
| 29 | + ``` |
| 30 | + |
| 31 | + An output similar to this appears: |
| 32 | + |
| 33 | + ```bash |
| 34 | + NAME STATUS ROLES AGE VERSION |
| 35 | + <aws-ip-hostname>.<region>.compute.internal Ready <none> 1m v1.15.11-eks-af3caf |
| 36 | + <aws-ip-hostname>.<region>.compute.internal Ready <none> 1m v1.15.11-eks-af3caf |
| 37 | + <aws-ip-hostname>.<region>.compute.internal Ready <none> 1m v1.15.11-eks-af3caf |
| 38 | + ``` |
| 39 | + After your Kubernetes cluster is up, you can use [dgraph-ha.yaml](https://github.com/dgraph-io/dgraph/blob/main/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml) to start the cluster. |
| 40 | + |
| 41 | +1. Start the cluster by creating the `Zero` and `Alpha` StatefulSets and the `Ratel UI` Deployment:
| 42 | + |
| 43 | + ```bash |
| 44 | + kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/main/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml |
| 45 | + ``` |
| 46 | + An output similar to this appears: |
| 47 | + |
| 48 | + ```bash |
| 49 | + service/dgraph-zero-public created |
| 50 | + service/dgraph-alpha-public created |
| 51 | + service/dgraph-ratel-public created |
| 52 | + service/dgraph-zero created |
| 53 | + service/dgraph-alpha created |
| 54 | + statefulset.apps/dgraph-zero created |
| 55 | + statefulset.apps/dgraph-alpha created |
| 56 | + deployment.apps/dgraph-ratel created |
| 57 | + ``` |
| 58 | +1. Confirm that the Pods were created successfully:
| 59 | + |
| 60 | + ```bash |
| 61 | + kubectl get pods |
| 62 | + ``` |
| 63 | + An output similar to this appears: |
| 64 | + |
| 65 | + ```bash |
| 66 | + NAME READY STATUS RESTARTS AGE |
| 67 | + dgraph-alpha-0 1/1 Running 0 6m24s |
| 68 | + dgraph-alpha-1 1/1 Running 0 5m42s |
| 69 | + dgraph-alpha-2 1/1 Running 0 5m2s |
| 70 | + dgraph-ratel-<pod-id> 1/1 Running 0 6m23s |
| 71 | + dgraph-zero-0 1/1 Running 0 6m24s |
| 72 | + dgraph-zero-1 1/1 Running 0 5m41s |
| 73 | + dgraph-zero-2 1/1 Running 0 5m6s |
| 74 | + ``` |
| 75 | +    You can check the logs for a Pod using `kubectl logs --follow <POD_NAME>`.
| 76 | + |
| 77 | +1. Port forward from your local machine to the Pod: |
| 78 | + |
| 79 | + ```bash |
| 80 | + kubectl port-forward service/dgraph-alpha-public 8080:8080 |
| 81 | + kubectl port-forward service/dgraph-ratel-public 8000:8000 |
| 82 | + ``` |
| 83 | +1. Go to `http://localhost:8000` to access Dgraph using the Ratel UI. |
| 84 | + |
| 85 | +{{% notice "note" %}} You can also access the service on its external IP address.{{% /notice %}}
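With the port-forward from the previous steps active, you can also script a quick health check against the Alpha HTTP endpoint (`/health` on port 8080). The sketch below parses a sample response inline so it is self-contained; in practice you would populate `health_json` with `curl --silent http://localhost:8080/health`, and the exact JSON fields shown are an assumption based on typical Dgraph health output:

```shell
# Sketch: verify Dgraph Alpha reports "healthy".
# In a live cluster, replace the sample with:
#   health_json=$(curl --silent http://localhost:8080/health)
health_json='[{"instance":"alpha","address":"dgraph-alpha-0:7080","status":"healthy","version":"v21.03.2"}]'

# Pull out the status field with grep/cut (no jq dependency).
status=$(printf '%s' "$health_json" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)

if [ "$status" = "healthy" ]; then
  echo "Alpha is healthy"
else
  echo "Alpha reported: ${status:-no status}" >&2
fi
```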
| 86 | + |
| 87 | +#### Deleting highly available Dgraph resources |
| 88 | + |
| 89 | +Delete all the resources using: |
| 90 | + |
| 91 | +```sh |
| 92 | +kubectl delete --filename https://raw.githubusercontent.com/dgraph-io/dgraph/main/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml |
| 93 | +kubectl delete persistentvolumeclaims --selector app=dgraph-zero |
| 94 | +kubectl delete persistentvolumeclaims --selector app=dgraph-alpha |
| 95 | +``` |
| 96 | +{{< /tab >}} |
| 97 | +{{< tab "Helm" >}} |
| 98 | + |
| 99 | +#### Before you begin |
| 100 | + |
| 101 | +* Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). |
| 102 | +* Ensure that you have a production-ready Kubernetes cluster with at least three worker nodes running in a cloud provider of your choice.
| 103 | +* Install [Helm](https://helm.sh/docs/intro/install/). |
| 104 | +* (Optional) To run Dgraph Alpha with TLS, see [TLS Configuration]({{< relref "tls-configuration.md" >}}). |
| 105 | + |
| 106 | +#### Installing a highly available Dgraph cluster using Helm |
| 107 | + |
| 108 | +1. Verify that you are able to access the nodes in the Kubernetes cluster: |
| 109 | + |
| 110 | + ```bash |
| 111 | + kubectl get nodes |
| 112 | + ``` |
| 113 | + |
| 114 | + An output similar to this appears: |
| 115 | + |
| 116 | + ```bash |
| 117 | + NAME STATUS ROLES AGE VERSION |
| 118 | + <aws-ip-hostname>.<region>.compute.internal Ready <none> 1m v1.15.11-eks-af3caf |
| 119 | + <aws-ip-hostname>.<region>.compute.internal Ready <none> 1m v1.15.11-eks-af3caf |
| 120 | + <aws-ip-hostname>.<region>.compute.internal Ready <none> 1m v1.15.11-eks-af3caf |
| 121 | + ``` |
| 122 | +    After your Kubernetes cluster is up and running, you can use the [Dgraph Helm chart](https://github.com/dgraph-io/charts/) to install a highly available Dgraph cluster.
| 123 | + |
| 124 | +1. Add the Dgraph Helm repository:
| 125 | + |
| 126 | + ```bash |
| 127 | + helm repo add dgraph https://charts.dgraph.io |
| 128 | + ``` |
| 129 | +1. Install the chart, using a release name of your choice in place of `<RELEASE-NAME>`:
| 130 | + |
| 131 | + ```bash |
| 132 | + helm install <RELEASE-NAME> dgraph/dgraph |
| 133 | + ``` |
| 134 | + |
| 135 | + You can also specify the version using: |
| 136 | + ```bash |
| 137 | + helm install <RELEASE-NAME> dgraph/dgraph --set image.tag="{{< version >}}" |
| 138 | + ``` |
| 139 | +    When configuring the Dgraph image tag, be careful not to use `latest` or `main` in a production environment. The Dgraph version that these tags point to can change, resulting in a mixed-version Dgraph cluster that can lead to an outage and potential data loss.
| 140 | + |
| 141 | + An output similar to this appears: |
| 142 | + |
| 143 | + ```bash |
| 144 | + NAME: <RELEASE-NAME> |
| 145 | + LAST DEPLOYED: Wed Feb 1 21:26:32 2023 |
| 146 | + NAMESPACE: default |
| 147 | + STATUS: deployed |
| 148 | + REVISION: 1 |
| 149 | + TEST SUITE: None |
| 150 | + NOTES: |
| 151 | + 1. You have just deployed Dgraph, version 'v21.12.0'. |
| 152 | + |
| 153 | + For further information: |
| 154 | + * Documentation: https://dgraph.io/docs/ |
| 155 | + * Community and Issues: https://discuss.dgraph.io/ |
| 156 | + 2. Get the Dgraph Alpha HTTP/S endpoint by running these commands. |
| 157 | + export ALPHA_POD_NAME=$(kubectl get pods --namespace default --selector "statefulset.kubernetes.io/pod-name=<RELEASE-NAME>-dgraph-alpha-0,release=<RELEASE-NAME>-dgraph" --output jsonpath="{.items[0].metadata.name}") |
| 158 | + echo "Access Alpha HTTP/S using http://localhost:8080" |
| 159 | + kubectl --namespace default port-forward $ALPHA_POD_NAME 8080:8080 |
| 160 | + |
| 161 | + NOTE: Change "http://" to "https://" if TLS was added to the Ingress, Load Balancer, or Dgraph Alpha service. |
| 162 | + ``` |
| 163 | +1. Get the names of the Pods in the cluster using `kubectl get pods`:
| 164 | + ```bash |
| 165 | + NAME READY STATUS RESTARTS AGE |
| 166 | + <RELEASE-NAME>-dgraph-alpha-0 1/1 Running 0 4m48s |
| 167 | + <RELEASE-NAME>-dgraph-alpha-1 1/1 Running 0 4m2s |
| 168 | + <RELEASE-NAME>-dgraph-alpha-2 1/1 Running 0 3m31s |
| 169 | + <RELEASE-NAME>-dgraph-zero-0 1/1 Running 0 4m48s |
| 170 | + <RELEASE-NAME>-dgraph-zero-1 1/1 Running 0 4m10s |
| 171 | + <RELEASE-NAME>-dgraph-zero-2 1/1 Running 0 3m50s |
| 172 | +    ```
|
| 173 | +1. Get the Dgraph Alpha HTTP/S endpoint by running these commands: |
| 174 | + ```bash |
| 175 | + export ALPHA_POD_NAME=$(kubectl get pods --namespace default --selector "statefulset.kubernetes.io/pod-name=<RELEASE-NAME>-dgraph-alpha-0,release=<RELEASE-NAME>-dgraph" --output jsonpath="{.items[0].metadata.name}") |
| 176 | + echo "Access Alpha HTTP/S using http://localhost:8080" |
| 177 | + kubectl --namespace default port-forward $ALPHA_POD_NAME 8080:8080 |
| 178 | + ``` |
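Before running `helm install` in production, the image-tag warning above can be enforced with a small guard. This is a sketch; the accepted pattern assumes Dgraph's `vMAJOR.MINOR.PATCH` release-tag convention, and the tag value is a placeholder:

```shell
# Sketch: refuse mutable image tags that could produce a
# mixed-version Dgraph cluster.
is_safe_tag() {
  case "$1" in
    latest|main) return 1 ;;  # floating tags change over time
    v[0-9]*)     return 0 ;;  # pinned release tag, e.g. v23.1.0
    *)           return 1 ;;  # unknown formats: be conservative
  esac
}

tag="v23.1.0"   # placeholder: the version you intend to deploy
if is_safe_tag "$tag"; then
  echo "deploying image tag $tag"
  # helm install <RELEASE-NAME> dgraph/dgraph --set image.tag="$tag"
else
  echo "refusing tag $tag: pin an explicit Dgraph version" >&2
fi
```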
| 179 | +#### Deleting the resources from the cluster |
| 180 | +
|
| 181 | +1. Delete the Helm deployment using: |
| 182 | +
|
| 183 | + ```sh |
| 184 | +    helm delete <RELEASE-NAME>
| 185 | + ``` |
| 186 | +2. Delete associated Persistent Volume Claims: |
| 187 | +
|
| 188 | + ```sh |
| 189 | +    kubectl delete pvc --selector release=<RELEASE-NAME>
| 190 | + ``` |
| 191 | +{{< /tab >}} |
| 192 | +{{% /tabs %}} |
| 193 | +
|
| 194 | +
|
| 195 | +### Dgraph configuration files |
| 196 | +
|
| 197 | +You can create Dgraph [Config]({{< relref "cli/config" >}}) files for the Alpha and Zero servers using a Helm chart configuration values file, `<MY-CONFIG-VALUES>.yaml`. For more information about the values, see the latest [configuration settings](https://github.com/dgraph-io/charts/blob/master/charts/dgraph/README.md#configuration).
| 198 | +
|
| 199 | +1. Open an editor of your choice and create a config file named `<MY-CONFIG-VALUES>.yaml`: |
| 200 | +
|
| 201 | +```yaml |
| 202 | +# <MY-CONFIG-VALUES>.yaml |
| 203 | +alpha: |
| 204 | + configFile: |
| 205 | + config.yaml: | |
| 206 | + alsologtostderr: true |
| 207 | + badger: |
| 208 | + compression_level: 3 |
| 209 | + tables: mmap |
| 210 | + vlog: mmap |
| 211 | + postings: /dgraph/data/p |
| 212 | + wal: /dgraph/data/w |
| 213 | +zero: |
| 214 | + configFile: |
| 215 | + config.yaml: | |
| 216 | + alsologtostderr: true |
| 217 | + wal: /dgraph/data/zw |
| 218 | +``` |
| 219 | +
|
| 220 | +2. Change to the directory in which you created `<MY-CONFIG-VALUES>.yaml`, and then install the chart with the Alpha and Zero configuration using:
| 221 | +
|
| 222 | +```sh |
| 223 | +helm install <RELEASE-NAME> dgraph/dgraph --values <MY-CONFIG-VALUES>.yaml |
| 224 | +``` |
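Since a mistyped top-level key in the values file silently falls back to chart defaults, a quick structural check before `helm install` can help. A minimal sketch; the file written here is a stand-in for your own `<MY-CONFIG-VALUES>.yaml`:

```shell
# Sketch: confirm the values file defines the top-level sections the
# chart expects before handing it to `helm install`.
values_file="my-config-values.yaml"   # stand-in for <MY-CONFIG-VALUES>.yaml
cat > "$values_file" <<'EOF'
alpha:
  configFile:
    config.yaml: |
      alsologtostderr: true
zero:
  configFile:
    config.yaml: |
      alsologtostderr: true
EOF

missing=0
for section in alpha zero; do
  grep -q "^${section}:" "$values_file" || { echo "missing section: ${section}" >&2; missing=1; }
done
[ "$missing" -eq 0 ] && echo "values file looks structurally sane"
rm -f "$values_file"
```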
| 225 | +
|
| 226 | +### Exposing Alpha and Ratel Services |
| 227 | +
|
| 228 | +By default, the Zero and Alpha services are exposed only within the Kubernetes cluster,
| 229 | +using the Kubernetes service type `ClusterIP`.
| 230 | +
|
| 231 | +To expose the Alpha and Ratel services publicly, you can use the Kubernetes service type `LoadBalancer` or an Ingress resource.
| 232 | +
|
| 233 | +{{% tabs %}} {{% tab "LoadBalancer" %}}
| 234 | +
|
| 235 | +##### Public Internet |
| 236 | +
|
| 237 | +To use an external load balancer, set the service type to `LoadBalancer`. |
| 238 | +
|
| 239 | +{{% notice "note" %}}For security purposes, we recommend limiting access to any public endpoints, for example by using an allow list.{{% /notice %}}
| 240 | +
|
| 241 | +1. To expose the Alpha service to the Internet, use:
| 242 | +
|
| 243 | +```sh |
| 244 | +helm install <RELEASE-NAME> dgraph/dgraph --set alpha.service.type="LoadBalancer" |
| 245 | +``` |
| 246 | +
|
| 247 | +2. To expose both the Alpha and Ratel services to the Internet, use:
| 248 | +
|
| 249 | +```sh |
| 250 | +helm install <RELEASE-NAME> dgraph/dgraph --set alpha.service.type="LoadBalancer" --set ratel.service.type="LoadBalancer" |
| 251 | +``` |
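One way to implement the allow list recommended above is the standard Kubernetes `loadBalancerSourceRanges` field on the Service. Whether the Dgraph chart exposes this field under `alpha.service` is an assumption here, so verify the key against the chart's configuration reference; the CIDR range is a placeholder:

```yaml
# Sketch: restrict which source IPs may reach the public load balancer.
# `loadBalancerSourceRanges` is a standard Kubernetes Service field;
# the `alpha.service` nesting mirrors the service settings used in this
# guide, but confirm it in the chart's README before relying on it.
alpha:
  service:
    type: LoadBalancer
    loadBalancerSourceRanges:
      - 203.0.113.0/24   # placeholder: your office or VPN CIDR
```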
| 252 | +
|
| 253 | +##### Private Internal Network |
| 254 | +
|
| 255 | +An external load balancer can be configured to face a private subnet rather than the public Internet. This way it can be accessed securely by clients on the same network, through a VPN, or from a jump server. In Kubernetes, this is often configured through provider-specific service annotations. Here's a short list of annotations from cloud providers:
| 256 | +
|
| 257 | +|Provider | Documentation Reference | Annotation | |
| 258 | +|------------|---------------------------|------------| |
| 259 | +|AWS |[Amazon EKS: Load Balancing](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html)|`service.beta.kubernetes.io/aws-load-balancer-internal: "true"`| |
| 260 | +|Azure |[AKS: Internal Load Balancer](https://docs.microsoft.com/azure/aks/internal-lb)|`service.beta.kubernetes.io/azure-load-balancer-internal: "true"`| |
| 261 | +|Google Cloud|[GKE: Internal Load Balancing](https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing)|`cloud.google.com/load-balancer-type: "Internal"`| |
| 262 | +
|
| 263 | +
|
| 264 | +As an example, using Amazon [EKS](https://aws.amazon.com/eks/) as the provider:
| 265 | +
|
| 266 | +1. Create a Helm chart configuration values file named `<MY-CONFIG-VALUES>.yaml`:
| 267 | +
|
| 268 | +```yaml |
| 269 | +# <MY-CONFIG-VALUES>.yaml |
| 270 | +alpha: |
| 271 | + service: |
| 272 | + type: LoadBalancer |
| 273 | + annotations: |
| 274 | + service.beta.kubernetes.io/aws-load-balancer-internal: "true" |
| 275 | +ratel: |
| 276 | + service: |
| 277 | + type: LoadBalancer |
| 278 | + annotations: |
| 279 | + service.beta.kubernetes.io/aws-load-balancer-internal: "true" |
| 280 | +``` |
| 281 | +
|
| 282 | +2. To expose the Alpha and Ratel services privately, use:
| 283 | +
|
| 284 | +```sh |
| 285 | +helm install <RELEASE-NAME> dgraph/dgraph --values <MY-CONFIG-VALUES>.yaml |
| 286 | +``` |
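The same pattern applies to the other providers in the table above. For example, a sketch for GKE swaps in the GKE annotation from that table, mirroring the AWS values file:

```yaml
# <MY-CONFIG-VALUES>.yaml -- GKE internal load balancer variant,
# using the annotation from the provider table above.
alpha:
  service:
    type: LoadBalancer
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
ratel:
  service:
    type: LoadBalancer
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
```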
| 287 | +{{% /tab %}} |
| 288 | +{{% tab "Ingress Resource" %}} |
| 289 | +
|
| 290 | +You can expose Alpha and Ratel using an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resource that can route traffic to service resources. Before using this option, you may need to install an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) first, as with [AKS](https://docs.microsoft.com/azure/aks/) and [EKS](https://aws.amazon.com/eks/); [GKE](https://cloud.google.com/kubernetes-engine) comes bundled with a default ingress controller. When routing traffic based on the hostname, you may want to integrate an add-on such as [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) so that DNS records are registered automatically when deploying Dgraph.
| 291 | +
|
| 292 | +As an example, you can configure a single ingress resource that uses [ingress-nginx](https://github.com/kubernetes/ingress-nginx) for Alpha and Ratel services. |
| 293 | +
|
| 294 | +1. Create a Helm chart configuration values file named `<MY-CONFIG-VALUES>.yaml`:
| 295 | +
|
| 296 | +```yaml |
| 297 | +# <MY-CONFIG-VALUES>.yaml |
| 298 | +global: |
| 299 | + ingress: |
| 300 | +    enabled: true
| 301 | + annotations: |
| 302 | + kubernetes.io/ingress.class: nginx |
| 303 | + ratel_hostname: "ratel.<my-domain-name>" |
| 304 | + alpha_hostname: "alpha.<my-domain-name>" |
| 305 | +``` |
| 306 | +
|
| 307 | +2. To expose the Alpha and Ratel services through the ingress, use:
| 308 | +
|
| 309 | +```sh |
| 310 | +helm install <RELEASE-NAME> dgraph/dgraph --values <MY-CONFIG-VALUES>.yaml |
| 311 | +``` |
| 312 | +
|
| 313 | +You can run `kubectl get ingress` to see the status, and then access the services through their hostnames, such as `http://alpha.<my-domain-name>` and `http://ratel.<my-domain-name>`.
| 314 | +
|
| 315 | +
|
| 316 | +{{% notice "tip" %}}Ingress controllers will likely have an option to configure access for private internal networks. Consult documentation from the ingress controller provider for further information.{{% /notice %}} |
| 317 | +{{% /tab %}}{{% /tabs %}} |
| 318 | +
|
| 319 | +### Upgrading the Helm chart |
| 320 | +
|
| 321 | +You can update your cluster configuration by updating the configuration of the
| 322 | +Helm chart. Because Dgraph is a stateful database, you need to apply
| 323 | +configuration changes carefully in order to move your cluster to the
| 324 | +desired configuration.
| 325 | +
|
| 326 | +In general, you can use [`helm upgrade`][helm-upgrade] to update the |
| 327 | +configuration values of the cluster. Depending on your change, you may need to |
| 328 | +upgrade the configuration in multiple steps. |
| 329 | +
|
| 330 | +[helm-upgrade]: https://helm.sh/docs/helm/helm_upgrade/ |
| 331 | +
|
| 332 | +To upgrade to an HA cluster setup: |
| 333 | +
|
| 334 | +1. Ensure that the shard replication setting, `zero.shardReplicaCount`, is set to a value greater than 1. For example, set the shard replica setting on the Zero node group to 3: `zero.shardReplicaCount=3`.
| 335 | +2. Run the Helm upgrade command to restart the Zero node group: |
| 336 | + ```sh |
| 337 | + helm upgrade <RELEASE-NAME> dgraph/dgraph [options] |
| 338 | + ``` |
| 339 | +3. Set the Alpha replica count flag. For example: `alpha.replicaCount=3`. |
| 340 | +4. Run the Helm upgrade command again: |
| 341 | + ```sh |
| 342 | + helm upgrade <RELEASE-NAME> dgraph/dgraph [options] |
| 343 | + ``` |
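The settings from the steps above can also be kept in a values file so the upgrade is reproducible. A sketch, with an illustrative file name; note that, per the steps above, the Zero setting should be applied (and the Zero group restarted) before the Alpha replica count is raised:

```yaml
# ha-values.yaml (illustrative name).
# Apply in two stages as described above: first with only the zero
# section, then add the alpha section and run `helm upgrade` again.
zero:
  shardReplicaCount: 3   # replicate each shard across three Alphas
alpha:
  replicaCount: 3        # run three Alpha Pods
```

Each stage is then applied with `helm upgrade <RELEASE-NAME> dgraph/dgraph --values ha-values.yaml`.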
| 344 | +
|
| 345 | +
|