Minikube is a tool for running a single-node Kubernetes cluster inside of a virtual machine. It is a popular tool for developing Kubernetes applications locally.
This topic will cover using `minikube` to run the project on Kubernetes locally.
::: tip Goal
By the end of this guide, you will be able to:
1. Navigate to `http://minikube.local` in your browser and interact with the application running in **minikube** in the same way that you would with the application running using **docker-compose** for local development.
1. Run **Cypress** tests against the application running in **minikube** to verify that everything is working correctly.
:::
I'll be following [this great guide](https://medium.com/@markgituma/kubernetes-local-to-production-with-django-1-introduction-d73adc9ce4b4) to get started, making changes and additions where necessary.
## Getting started
I'll be using the following alias to use `kubectl`:

```
alias k='kubectl'
```
## Building Images
We will need to build two images from our code:
1. The `backend` image that will run the Django server, Django Channels, Celery and Beat
1. The `frontend` image that will contain nginx for serving our Quasar frontend application.
Both of these images will need environment variables. We will use `docker-compose` to manage the builds and environment variables easily. Read [this article](https://vsupalov.com/docker-arg-env-variable-guide/) for more information. You don't have to use docker-compose to build the images, but it keeps things straightforward and easy to understand.
Remember that the docker CLI, like `kubectl`, sends requests to a REST API. When we run `minikube start`, `kubectl` is configured to send commands to the Kubernetes API server running inside the minikube virtual machine. Similarly, we need to tell the docker CLI to send its API calls to the docker daemon running in the minikube VM, **not** the docker daemon on our local machine (even though the files from which we build our images live on our local machine, not on the minikube VM's file system). We can point the docker CLI at the minikube VM with the following command:
```
eval $(minikube docker-env)
```
Now run `docker ps` and you will see many different containers that Kubernetes uses internally.
To point the docker CLI back at your local docker daemon, run:
```
eval $(minikube docker-env -u)
```
Let's look at what the command is doing:
`$(minikube docker-env)` results in the following output:
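(The exact values below are illustrative; the IP address and certificate path will differ on your machine, and newer minikube versions may print additional variables.)

```sh
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)
```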
Notice that `DOCKER_HOST` points to the minikube VM on docker's default port `2376`. `eval` executes these commands, setting the environment variables in the *current shell* by using `export`. If you switch to another shell, you will need to rerun this command in order to run docker commands against minikube's docker daemon.
With these environment variables set, we can build the Django container image using docker-compose.
Here's the `backend` service defined in `compose/minikube.yml`:
```yml
backend:
  image: backend:1
  build:
    context: ../backend/
    dockerfile: scripts/dev/Dockerfile
```
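With that service defined, a docker-compose build along these lines (the file path and service name are taken from the snippet above; the exact invocation is an assumption) builds the image into minikube's docker daemon:

```sh
# Run after `eval $(minikube docker-env)` so the image is built
# inside the minikube VM, where Kubernetes can find it
docker-compose -f compose/minikube.yml build backend
```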
**`kubernetes/django/deployment.yml`**
```yml
apiVersion: apps/v1
# ...
    spec:
      containers:
        - name: django-backend-container
          imagePullPolicy: IfNotPresent
          image: backend:1
          command: ["./manage.py", "runserver"]
          ports:
            - containerPort: 8000
```
::: warning No environment variables
**Note**: the pod template in this deployment definition does not have any environment variables. We will need to add environment variables for sensitive information such as the Postgres username and password; we will add these shortly.
:::
There is one line in the above resource definition that makes everything work with minikube and the docker images we have just built: `imagePullPolicy: IfNotPresent`. This line tells Kubernetes to pull the image (from Docker Hub, or another registry if specified) **only** if the image is not present locally. If we didn't set `imagePullPolicy` to `IfNotPresent`, Kubernetes would try to pull the image from Docker Hub, which would probably fail, resulting in an `ErrImagePull` error.
Let's send this file to the minikube Kubernetes API server with the following command:
```
kubectl apply -f kubernetes/django/deployment.yml
```
Your pod for the deployment should be starting. Inspect the pods with `k get pods`.
## Postgres
```yml
# ...
path: /data/postgres-pv
```
## Secrets
Let's use base64 encoding to define a username and password for our Postgres database:
```
echo -n "my-string" | base64
```
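Alternatively, `kubectl` can create the Secret and handle the base64 encoding for you. A sketch, with hypothetical secret and key names:

```sh
# Creates a Secret named postgres-credentials with two keys;
# kubectl base64-encodes the values automatically
kubectl create secret generic postgres-credentials \
  --from-literal=user=postgres \
  --from-literal=password=my-secret-password
```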
::: tip kubectl cheatsheet from kubernetes documentation
:::

Make sure that your current shell has the correct environment variables set for the minikube docker daemon:

```
eval $(minikube docker-env)
```
We can pass the environment variables needed during the build process with `ARG` and `ENV`.
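For example, docker-compose can forward a build argument into the Dockerfile (a sketch using standard compose syntax; the exact wiring in this project is assumed):

```yml
backend:
  build:
    context: ../backend/
    args:
      # matched by `ARG DOMAIN_NAME` in the Dockerfile, which can then
      # be persisted into the image with `ENV DOMAIN_NAME=$DOMAIN_NAME`
      DOMAIN_NAME: minikube.local
```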
For `DOMAIN_NAME`, we want to use an address that will point to the minikube Kubernetes cluster. Since the IP might change, we can use a named domain such as `test.dev` and add a line to `/etc/hosts` that points `test.dev` to the minikube IP. Then we will need to set up an Ingress to point `test.dev` to our `kubernetes-django-service` service.
With the Ingress addon enabled (`minikube addons enable ingress`), we can add an `Ingress` resource:
```yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
spec:
  rules:
    - host: minikube.local
      http:
        paths:
          - path: /api/
            backend:
              serviceName: kubernetes-django-service
              servicePort: 8000
          - path: /admin/
            backend:
              serviceName: kubernetes-django-service
              servicePort: 8000
          - path: /static/
            backend:
              serviceName: kubernetes-django-service
              servicePort: 8000
          - path: /
            backend:
              serviceName: kubernetes-frontend-service
              servicePort: 80
```
Also, we need to add an entry to `/etc/hosts` so that requests to `minikube.local` will be forwarded to the `minikube ip`:
```sh
192.168.99.106 minikube.local
```
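Rather than hard-coding the IP (which can change between minikube restarts), you can look it up and append the entry in one step. A sketch; requires sudo:

```sh
# Append "<minikube ip> minikube.local" to /etc/hosts
echo "$(minikube ip) minikube.local" | sudo tee -a /etc/hosts
```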
## Health Checks
We can add readiness and liveness checks for the Django backend container. The liveness probe will check that the container has not crashed. The readiness probe will check that the container is ready to accept traffic by verifying that Postgres and Redis are ready to accept connections.
Here are the checks in the `container` spec:
```yml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readiness
    port: 8000
  initialDelaySeconds: 20
  timeoutSeconds: 5
```
See [this article](https://www.ianlewis.org/en/kubernetes-health-checks-django) as a reference for how health checks have been implemented.
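Once the pod is running, the probe endpoints can be exercised by hand with port-forwarding. A sketch; the deployment name `django-backend` is assumed from the pod names shown earlier:

```sh
# Forward local port 8000 to the Django container, then hit both probes
kubectl port-forward deployment/django-backend 8000:8000 &
sleep 2
curl -i http://localhost:8000/healthz
curl -i http://localhost:8000/readiness
kill %1  # stop the background port-forward
```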
## Celery
Next, let's add a deployment for Celery.
::: warning TODO
Not finished
:::
## Websockets
Next, let's add a deployment for Django Channels.
::: warning TODO
Not finished
:::
## Cypress tests against the minikube cluster
Now that we have implemented all parts of our application in minikube, let's run our tests against the cluster. Run the following command to open Cypress:
```
$(npm bin)/cypress open --config baseUrl=http://minikube.local
```
Click `Run all specs` and make sure there are no errors in the test results.