From 652632e71051614c49d153b9e3deb12fba5eaca4 Mon Sep 17 00:00:00 2001
From: Mattias Gees
Date: Fri, 7 Sep 2018 14:14:44 +0100
Subject: [PATCH 1/2] Add HPA docs

Signed-off-by: Mattias Gees
---
 docs/examples/horizontal-pod-autoscaling.rst | 214 +++++++++++++++++++
 docs/spelling_wordlist.txt                   |   5 +
 2 files changed, 219 insertions(+)
 create mode 100644 docs/examples/horizontal-pod-autoscaling.rst

diff --git a/docs/examples/horizontal-pod-autoscaling.rst b/docs/examples/horizontal-pod-autoscaling.rst
new file mode 100644
index 0000000000..aa6b34fcb9
--- /dev/null
+++ b/docs/examples/horizontal-pod-autoscaling.rst
@@ -0,0 +1,214 @@
+Horizontal Pod Autoscaling
+--------------------------
+
+This gives an example setup of `HPA `_.
+We are using `Prometheus `_ and the `Prometheus-adapter `_.
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Make sure `HELM `_ is `activated `_ on the Tarmak cluster.
+You also need to make sure you can connect to the cluster with your HELM install.
+
+.. code-block:: bash
+
+    helm version
+    Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
+    Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
+
+
+Setup
+~~~~~
+
+Prometheus
+++++++++++
+
+You need Prometheus to scrape and store metrics from applications in its
+time series database. These metrics will be used by the HPA to decide if it
+has to scale the application. You can use an already running Prometheus in 
+your environment or opt to set one up with the following steps.
+
+.. warning::
+    This will only set up a simple Prometheus. If you want to use Alertmanager and other
+    more advanced options, take a look at the `kube-prometheus `_ chart.
+
+
+First, activate the HELM repository of Prometheus:
+
+.. code-block:: bash
+
+    helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
+
+
+Now install the Prometheus operator, which can be used to install and manage
+Prometheus, Alertmanager and configure ServiceMonitors.
+
+.. code-block:: bash
+
+    helm install coreos/prometheus-operator --name prometheus-operator --namespace application-monitoring
+
+After installing the Prometheus operator you can install Prometheus. To accomplish
+this you use an yaml values file. This yaml file defines you want 2 Prometheus
+pods running, keep data for 14 days and want to have a persistent volume
+of 20 GB per Prometheus pod. You can tweak the Prometheus install further
+by following the official `documentation `__.
+
+.. code-block:: yaml
+
+    replicaCount: 2 # You want HA
+    retention: 336h # 14 days of retention
+    storageSpec:
+      volumeClaimTemplate:
+        spec:
+          class: gp2
+          resources:
+            requests:
+              storage: 20Gi # Preferably use bigger for iops (eg 100Gi)
+    podAntiAffinity: hard # You want to be 100% sure you don't land on the same node with your Prometheus instances
+    serviceMonitorsSelector:
+      matchExpressions:
+      - key: prometheus
+        operator: Exists
+
+Save the yaml as ``prometheus.yaml`` and run the following command:
+
+.. code-block:: bash
+
+    helm install coreos/prometheus --name prometheus-applications --namespace application-monitoring -f prometheus.yaml
+
+
+Prometheus Adapter
+++++++++++++++++++
+
+You need to create a CA and SSL cert to validate your APIService with Kubernetes.
+
+.. code-block:: bash
+
+    export PURPOSE=server
+    openssl req -x509 -sha256 -new -nodes -days 365 -newkey rsa:2048 -keyout ${PURPOSE}-ca.key -out ${PURPOSE}-ca.crt -subj "/CN=ca"
+    echo '{"signing":{"default":{"expiry":"43800h","usages":["signing","key encipherment","'${PURPOSE}'"]}}}' > "${PURPOSE}-ca-config.json"
+
+    export SERVICE_NAME=prometheus-adapter-prometheus-adapter
+    export ALT_NAMES='"prometheus-adapter-prometheus-adapter.application-monitoring","prometheus-adapter-prometheus-adapter.application-monitoring.svc"'
+    echo '{"CN":"'${SERVICE_NAME}'","hosts":['${ALT_NAMES}'],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=server-ca.crt -ca-key=server-ca.key -config=server-ca-config.json - | cfssljson -bare apiserver
+
+Now create a ``prometheus-adapter.yaml`` with the following content:
+
+.. code-block:: yaml
+
+    tls:
+      enable: true
+      ca: |-
+
+      key: |-
+
+      certificate: |-
+
+
+    # Change URL and port if you setup your own Prometheus server.
+    prometheus:
+      url: http://prometheus-applications.application-monitoring.svc
+      port: 9090
+
+    replicas: 2
+
+Install the Prometheus Adapter:
+
+.. code-block:: bash
+
+    helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
+    helm install incubator/prometheus-adapter --name prometheus-adapter --namespace application-monitoring -f prometheus-adapter.yaml
+
+
+Now you can test if it works by running the following command against your
+Kubernetes cluster.
+
+.. code-block:: bash
+
+    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
+    {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[]}
+
+Usage
+~~~~~
+
+To start scaling based on custom-metrics, you need to have an application
+or Prometheus exporter that exposes metrics in Prometheus format. Another
+requirement is to have a Kubernetes endpoint for your application. That
+endpoint will be used to discover your Pods. If your application meets
+these requirements, you can add a ``ServiceMonitor`` to start monitoring
+your application with Prometheus.
+
+.. code-block:: yaml
+
+    apiVersion: monitoring.coreos.com/v1
+    kind: ServiceMonitor
+    metadata:
+      name: 
+      namespace: 
+      labels:
+        prometheus: prometheus-applications
+    spec:
+      endpoints:
+      - interval: 30s
+        targetport: 
+        path: /metrics
+      namespaceSelector:
+        matchNames:
+        - 
+      selector:
+        matchLabels:
+          : 
+
+When adding the following ``ServiceMonitor``, make sure to keep ``prometheus``
+as an key in labels, this is how Prometheus discovers the different
+ServiceMonitors.
+
+After applying the ``ServiceMonitor``, Prometheus should start discovering
+all your application pods and start to monitor them.
+
+Now you can find the correct metric and you can get it out of the custom.metrics
+API endpoint.
+
+.. code-block:: bash
+
+    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
+    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/requestcount" | jq
+    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/service/example/requestcount | jq
+
+When you found the correct metric to scale on, you can create your
+``HorizontalPodAutoscaler``.
+
+.. code-block:: yaml
+
+    kind: HorizontalPodAutoscaler
+    apiVersion: autoscaling/v2beta1
+    metadata:
+      name: example
+      namespace: default
+    spec:
+      scaleTargetRef:
+        apiVersion: apps/v1beta2
+        kind: Deployment
+        name: example
+      minReplicas: 2
+      maxReplicas: 4
+      metrics:
+      - type: Object
+        object:
+          target:
+            kind: Service
+            name: example
+          metricName: requestcount
+          targetValue: 30
+
+Now you watch the horizontal pod autoscaler:
+
+.. code-block:: bash
+
+    kubectl describe hpa example
+
+
+More examples can be found in the Kubernetes `documentation `__.
+
+.. warning::
+    Certainly take a look at the types ``Object`` and ``Pods`` for HPA based on custom-metrics.
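[Editor's aside, not part of the patch: the ``prometheus-adapter.yaml`` values file above expects the CA, key, and certificate PEM text pasted in indented under the ``|-`` block scalars. A tiny helper like the following can produce correctly indented text; the file names are illustrative stand-ins, substitute the cfssl/openssl output you actually generated.]

```shell
#!/bin/sh
# Indent a PEM file by four spaces so it can sit under a YAML `|-` block scalar.
# /tmp/demo.pem is a stand-in for e.g. server-ca.crt or apiserver.pem.
indent_pem() {
  sed 's/^/    /' "$1"
}

printf 'line-one\nline-two\n' > /tmp/demo.pem   # fake PEM content for the demo
indent_pem /tmp/demo.pem
```

Paste the indented output directly under the matching ``ca:``, ``key:``, or ``certificate:`` key.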
diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt
index 32990b6a8b..689aaa66d4 100644
--- a/docs/spelling_wordlist.txt
+++ b/docs/spelling_wordlist.txt
@@ -94,3 +94,8 @@ unsealer
 username
 wrt
 yaml
+gid
+Alertmanager
+Adapter
+autoscale
+ServiceMonitors

From d7184f1a61769baf0b4854201c2406470b3c7079 Mon Sep 17 00:00:00 2001
From: Mattias Gees
Date: Mon, 10 Sep 2018 10:15:24 +0100
Subject: [PATCH 2/2] Fix docs

Signed-off-by: Mattias Gees
---
 docs/examples/horizontal-pod-autoscaling.rst | 61 +++++++++++---------
 1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/docs/examples/horizontal-pod-autoscaling.rst b/docs/examples/horizontal-pod-autoscaling.rst
index aa6b34fcb9..d8c61b7510 100644
--- a/docs/examples/horizontal-pod-autoscaling.rst
+++ b/docs/examples/horizontal-pod-autoscaling.rst
@@ -25,7 +25,7 @@ Prometheus
 
 You need Prometheus to scrape and store metrics from applications in its
 time series database. These metrics will be used by the HPA to decide if it
-has to scale the application. You can use an already running Prometheus in 
+has to scale the application. You can use an already running Prometheus in
 your environment or opt to set one up with the following steps.
 
 .. warning::
@@ -48,7 +48,7 @@ Prometheus, Alertmanager and configure ServiceMonitors.
 .. code-block:: bash
 
     helm install coreos/prometheus-operator --name prometheus-operator --namespace application-monitoring
 
 After installing the Prometheus operator you can install Prometheus. To accomplish
-this you use an yaml values file. This yaml file defines you want 2 Prometheus
+this you use a yaml values file. This yaml file defines you want 2 Prometheus
 pods running, keep data for 14 days and want to have a persistent volume
 of 20 GB per Prometheus pod. You can tweak the Prometheus install further
 by following the official `documentation `__.
@@ -88,10 +88,15 @@ You need to create a CA and SSL cert to validate your APIService with Kubernetes
     openssl req -x509 -sha256 -new -nodes -days 365 -newkey rsa:2048 -keyout ${PURPOSE}-ca.key -out ${PURPOSE}-ca.crt -subj "/CN=ca"
     echo '{"signing":{"default":{"expiry":"43800h","usages":["signing","key encipherment","'${PURPOSE}'"]}}}' > "${PURPOSE}-ca-config.json"
 
-    export SERVICE_NAME=prometheus-adapter-prometheus-adapter
-    export ALT_NAMES='"prometheus-adapter-prometheus-adapter.application-monitoring","prometheus-adapter-prometheus-adapter.application-monitoring.svc"'
+    export SERVICE_NAME=prometheus-adapter
+    export ALT_NAMES='"prometheus-adapter.application-monitoring","prometheus-adapter.application-monitoring.svc"'
     echo '{"CN":"'${SERVICE_NAME}'","hosts":['${ALT_NAMES}'],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=server-ca.crt -ca-key=server-ca.key -config=server-ca-config.json - | cfssljson -bare apiserver
 
+
+.. warning::
+    Make sure the ``SERVICE_NAME`` and ``ALT_NAMES`` match your application release
+    name and namespace where it is deployed.
+
 Now create a ``prometheus-adapter.yaml`` with the following content:
 
 .. code-block:: yaml
@@ -99,11 +104,11 @@ Now create a ``prometheus-adapter.yaml`` with the following content:
     tls:
       enable: true
      ca: |-
-
+
       key: |-
-
+
       certificate: |-
-
+
 
     # Change URL and port if you setup your own Prometheus server.
     prometheus:
@@ -117,15 +122,16 @@ Install the Prometheus Adapter:
 
 .. code-block:: bash
 
-    helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
-    helm install incubator/prometheus-adapter --name prometheus-adapter --namespace application-monitoring -f prometheus-adapter.yaml
+    helm repo add stable https://kubernetes-charts.storage.googleapis.com/
+    helm install stable/prometheus-adapter --name prometheus-adapter --namespace application-monitoring -f prometheus-adapter.yaml
 
-Now you can test if it works by running the following command against your
+You can test if HPA works by running the following command against your
 Kubernetes cluster.
 
 .. code-block:: bash
 
     kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
+    {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[]}
 
 
 Usage
@@ -144,7 +150,7 @@ your application with Prometheus.
     kind: ServiceMonitor
     metadata:
      name: 
-      namespace: 
+      namespace: application-monitoring
       labels:
         prometheus: prometheus-applications
     spec:
@@ -159,23 +165,22 @@ your application with Prometheus.
       matchLabels:
         : 
 
-When adding the following ``ServiceMonitor``, make sure to keep ``prometheus``
-as an key in labels, this is how Prometheus discovers the different
-ServiceMonitors.
+When adding the ``ServiceMonitor``, make sure to keep ``prometheus`` as a key
+in labels, that is how Prometheus discovers the different ServiceMonitors.
+The ``ServiceMonitor`` has to be deployed in the same namespace as your Prometheus.
 
 After applying the ``ServiceMonitor``, Prometheus should start discovering
 all your application pods and start to monitor them.
 
-Now you can find the correct metric and you can get it out of the custom.metrics
-API endpoint.
+You can find the correct metric by querying the custom.metrics API endpoint.
 
 .. code-block:: bash
 
     kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
-    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/requestcount" | jq
-    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/service/example/requestcount | jq
+    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces//pods/*/" | jq
+    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces//service// | jq
 
-When you found the correct metric to scale on, you can create your
+When you have found the correct metric to scale on, you can create your
 ``HorizontalPodAutoscaler``.
 
 .. code-block:: yaml
@@ -183,8 +188,8 @@ When you found the correct metric to scale on, you can create your
     kind: HorizontalPodAutoscaler
     apiVersion: autoscaling/v2beta1
     metadata:
-      name: example
-      namespace: default
+      name: 
+      namespace: 
     spec:
       scaleTargetRef:
         apiVersion: apps/v1beta2
@@ -193,15 +198,12 @@ When you found the correct metric to scale on, you can create your
       minReplicas: 2
       maxReplicas: 4
       metrics:
-      - type: Object
-        object:
-          target:
-            kind: Service
-            name: example
-          metricName: requestcount
-          targetValue: 30
+      - type: Pods
+        pods:
+          metricName: 
+          targetAverageValue: 
 
-Now you watch the horizontal pod autoscaler:
+Watch the horizontal pod autoscaler:
 
 .. code-block:: bash
 
     kubectl describe hpa example
@@ -212,3 +214,6 @@ More examples can be found in the Kubernetes `documentation `__.
 
 .. warning::
     Certainly take a look at the types ``Object`` and ``Pods`` for HPA based on custom-metrics.
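[Editor's aside, not part of the patch: as background to the ``targetAverageValue`` field in the ``Pods``-type HPA above, the autoscaler's core rule is desiredReplicas = ceil(currentReplicas × currentAverageValue / targetAverageValue), clamped to the min/max bounds. A quick shell sketch with made-up numbers:]

```shell
#!/bin/sh
# HPA scaling rule for a Pods metric: desired = ceil(current * average / target),
# then clamped to [minReplicas, maxReplicas]. All numbers are hypothetical.
current_replicas=2
metric_average=90    # pretend observed per-pod metric value
target_average=30    # the targetAverageValue from the HPA spec
max_replicas=4       # the maxReplicas from the HPA spec

# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * metric_average + target_average - 1) / target_average ))
if [ "$desired" -gt "$max_replicas" ]; then
  desired=$max_replicas
fi
echo "$desired"      # prints 4: the ratio asks for 6 replicas, maxReplicas caps it
```

With these numbers the raw calculation gives ceil(2 × 90 / 30) = 6, so the autoscaler would scale out until ``maxReplicas`` stops it at 4.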