diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 0000000..1ea5da3 Binary files /dev/null and b/.DS_Store differ diff --git a/README.md b/README.md deleted file mode 100644 index e11da43..0000000 --- a/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# Infrastructure As Code Tutorial - -[![license](https://img.shields.io/github/license/Artemmkin/infrastructure-as-code-tutorial.svg)](https://github.com/Artemmkin/infrastructure-as-code-tutorial/blob/master/LICENSE) -[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Learn%20about%20Infrastructure%20as%20Code%20https%3A%2F%2Fgithub.com%2FArtemmkin%2Finfrastructure-as-code-tutorial%20%20Tutorial%20created%20by%20@artemmkins%20covers%20%23Packer,%20%23Terraform,%20%23Ansible,%20%23Vagrant,%20%23Docker,%20and%20%23Kubernetes.%20%23DevOps) - -This tutorial is intended to show what the **Infrastructure as Code** (**IaC**) is, why we need it, and how it can help you manage your infrastructure more efficiently. - -It is practice-based, meaning I don't give much theory on what Infrastructure as Code is in the beginning of the tutorial, but instead let you feel it through the practice first. At the end of the tutorial, I summarize some of the key points about Infrastructure as Code based on what you learn through the labs. - -This tutorial is not meant to give a complete guide on how to use a specific tool like Ansible or Terraform, instead it focuses on how these tools work in general and what problems they solve. - -> The tutorial was inspired by [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) tutorial. I used it as an example to structure this one. - -_See [my presentation at DevOpsDays Silicon Valley](https://www.youtube.com/watch?v=XbcW2B7roLo&t=) in which I talk more in depth about the tutorial._ - -## Target Audience - -The target audience for this tutorial is anyone who loves or/and works in IT. - -## Tools Covered - -* Packer -* Terraform -* Ansible -* Vagrant -* Docker -* Docker Compose -* Kubernetes - -## Results of completing the tutorial - -By the end of this tutorial, you'll make your own repository looking like [this one](https://github.com/Artemmkin/infrastructure-as-code-example). - -NOTE: you can use this [example repository](https://github.com/Artemmkin/infrastructure-as-code-example) in case you get stuck in some of the labs. - -## Labs - -This tutorial assumes you have access to the Google Cloud Platform. While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms. - -* [Introduction](docs/00-introduction.md) -* [Prerequisites](docs/01-prerequisites.md) -* [Manual Operations](docs/02-manual-operations.md) -* [Scripts](docs/03-scripts.md) -* [Packer](docs/04-packer.md) -* [Terraform](docs/05-terraform.md) -* [Ansible](docs/06-ansible.md) -* [Vagrant](docs/07-vagrant.md) -* [Docker](docs/08-docker.md) -* [Docker Compose](docs/09-docker-compose.md) -* [Kubernetes](docs/10-kubernetes.md) -* [What is Infrastructure as Code?](docs/50-what-is-iac.md) diff --git a/docs/00-introduction.adoc b/docs/00-introduction.adoc new file mode 100644 index 0000000..f0dce87 --- /dev/null +++ b/docs/00-introduction.adoc @@ -0,0 +1,23 @@ += Introduction + +Let's dream for a little bit... + +Imagine that you're a young developer who developed a web application. +You run and test your application locally and everything works great, which makes you very happy. 
+You believe that this is going to blow the minds of Internet users and bring you a lot of money. + +Then you realize that there is a small problem. +You ask yourself a question: "How do I make my application available to Internet users?" + +You know that you can't run the application locally all the time, because your old laptop will become slow for other tasks and will probably crash if a lot of users use your app at the same time. +Besides, your ISP randomly changes the public IP of your router, so you don't know at which IP address your application will be accessible to the public at any given moment. + +You start realizing that the problem you're facing is not as small as you thought. +In fact, there is a whole new craft for you to learn in the IT world: running software applications and making sure they are always available to users. + +The craft is called *IT operations*. +And in almost every IT department, there is an operations (Ops) team that manages the platform where the applications run. + +The tutorial you are about to begin will give you, a young developer, a glimpse into what operations work looks like and how you can do this work more efficiently by using the *Infrastructure as Code* approach. + +Next: xref:01-prerequisites.adoc[Prerequisites] diff --git a/docs/00-introduction.md b/docs/00-introduction.md deleted file mode 100644 index 307ff9b..0000000 --- a/docs/00-introduction.md +++ /dev/null @@ -1,17 +0,0 @@ -# Introduction - -Let's dream for a little bit... - -Imagine that you're a young developer who developed a web application. You run and test your application locally and everything works great, which makes you very happy. You believe that this is going to blow the minds of Internet users and bring you a lot of money. - -Then you realize that there is a small problem. You ask yourself a question: "How do I make my application available to the Internet users?" - -You're thinking that you can't run the application locally all the time, because your old laptop will become slow for other tasks and will probably crash if a lot of users will be using your app at the same time. Besides, your ISP changes randomly the public IP for your router, so you don't know on which IP address your application will be accessible to the public at any given moment. - -You start realizing that the problem you're facing is not as small as you thought. In fact, there is a whole new craft for you to learn in IT world about running software applications and making sure they are always available to the users. - -The craft is called **IT operations**. And in almost every IT department, there is an operations (Ops) team who manages the platform where the applications are running. - -The tutorial you are about to begin will give you, a young developer, a bit of a glance into what operations work look like and how you can do this work more efficiently by using **Infrastructure as Code** approach. - -Next: [Prerequisites](01-prerequisites.md) diff --git a/docs/01-prerequisites.adoc b/docs/01-prerequisites.adoc new file mode 100644 index 0000000..548a4c3 --- /dev/null +++ b/docs/01-prerequisites.adoc @@ -0,0 +1,38 @@ += Prerequisites + +== Google Cloud Platform + +In this tutorial, we use the https://cloud.google.com/[Google Cloud Platform] to provision the compute infrastructure. +You have already signed up. + +Start in the Google Cloud Shell.
+https://cloud.google.com/shell/docs/using-cloud-shell[(review)] + +== Google Cloud Platform + +=== Set a Default Project, Compute Region and Zone + +This tutorial assumes a default compute region and zone have been configured. + +Set a default compute region appropriate to your location (https://cloud.google.com/compute/docs/regions-zones[GCP regions and zones]): + +[source,bash] +---- +$ gcloud config set compute/region us-central1 +---- + +Set a default compute zone appropriate to the zone: + +[source,bash] +---- +$ gcloud config set compute/zone us-central1-c +---- + +Verify the configuration settings: + +[source,bash] +---- +$ gcloud config list +---- + +Next: xref:02-manual-operations.adoc[Manual operations] diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md deleted file mode 100644 index b36ea60..0000000 --- a/docs/01-prerequisites.md +++ /dev/null @@ -1,51 +0,0 @@ -# Prerequisites - -## Google Cloud Platform - -In this tutorial, we use the [Google Cloud Platform](https://cloud.google.com/) to provision the compute infrastructure. You can [sign up](https://cloud.google.com/free/) for $300 in free credits, which will be more than sufficient to complete all of the labs in this tutorial. - -## Google Cloud Platform SDK - -### Install the Google Cloud SDK - -Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility for your platform. - -Verify the Google Cloud SDK version is 183.0.0 or higher: - -```bash -$ gcloud version -``` - -### Set Application Default Credentials - -This tutorial assumes Application Default Credentials (ADC) were set to authenticate to Google Cloud Platform API. - -Use the following gcloud command to acquire new user credentials to use for ADC. - -```bash -$ gcloud auth application-default login -``` - -### Set a Default Project, Compute Region and Zone - -This tutorial assumes a default compute region and zone have been configured. - -Set a default compute region: - -```bash -$ gcloud config set compute/region europe-west1 -``` - -Set a default compute zone: - -```bash -$ gcloud config set compute/zone europe-west1-b -``` - -Verify the configuration settings: - -```bash -$ gcloud config list -``` - -Next: [Manual operations](02-manual-operations.md) \ No newline at end of file diff --git a/docs/02-manual-operations.adoc b/docs/02-manual-operations.adoc new file mode 100644 index 0000000..7d0eb27 --- /dev/null +++ b/docs/02-manual-operations.adoc @@ -0,0 +1,195 @@ += Manual Operations + +To better understand the `Infrastructure as Code` (`IaC`) concept, we will first define the problem we are facing and deal with it with manually to get our hands dirty and see how things work overall. + +== Intro + +Imagine you have developed a new cool application called https://github.com/dm-academy/node-svc-v1[node-svc]. + +You want to run your application on a dedicated server and make it available to the Internet users. + +You heard about the `public cloud` thing, which allows you to provision compute resources and pay only for what you use. +You believe it's a great way to test your idea of an application and see if people like it. + +You've signed up for a free tier of https://cloud.google.com/[Google Cloud Platform] (GCP) and are about to start deploying your application. + +== Provision Compute Resources + +First thing we will do is to provision a virtual machine (VM) inside GCP for running the application. 
+ +Use the following gcloud command in your terminal to launch a VM with Ubuntu 16.04 distro: + +[source,bash] +---- +$ gcloud compute instances create node-svc\ + --image-family ubuntu-minimal-2004-lts \ + --image-project ubuntu-os-cloud \ + --boot-disk-size 10GB \ + --machine-type f1-micro +---- + +== Create an SSH key pair + +Generate an SSH key pair for future connections to the VM instances (run the command exactly as it is): + +[source,bash] +---- +$ ssh-keygen -t rsa -f ~/.ssh/node-user -C node-user -P "" +---- + +Create an SSH public key for your project: + +[source,bash] +---- +$ gcloud compute project-info add-metadata \ + --metadata ssh-keys="node-user:$(cat ~/.ssh/node-user.pub)" +---- + +Check your ssh-agent is running: + +[source,bash] +---- +$ echo $SSH_AGENT_PID +---- + +If you get a number, it is running. +If you get nothing, then run: + +[source,bash] +---- +$ eval `ssh-agent` +---- + +Add the SSH private key to the ssh-agent: + + $ ssh-add ~/.ssh/node-user + +Verify that the key was added to the ssh-agent: + +[source,bash] +---- +$ ssh-add -l +---- + +== Install Application Dependencies + +To start the application, you need to first configure the environment for running it. + +Connect to the started VM via SSH using the following two commands: + +[source,bash] +---- +$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc) +$ ssh node-user@${INSTANCE_IP} +---- + +Install Node and npm: + +[source,bash] +---- +$ +$ sudo apt-get install -y nodejs npm +---- + +Check the installed version of Node: + +[source,bash] +---- +$ node -v +---- + +Install `git`: + +[source,bash] +---- +$ sudo apt -y install git +---- + +Clone the application repo into the home directory of `node-user` user (reminder, how do you clone to the right location?): + +[source,bash] +---- +$ git clone https://github.com/dm-academy/node-svc-v1 +---- + +Navigate to the repo (`cd node-svc-v1`) and check out the 02 branch (matching this lesson) + +[source,bash] +---- +$ git checkout 02 +Branch 02 set up to track remote branch 02 from origin. +Switched to a new branch '02' +---- + +Initialize npm (Node Package Manager) and install express: + +[source,bash] +---- +$ npm install +$ npm install express +---- + +== Start the Application + +Look at the server.js file (`cat`). +We will discuss in class. + +Start the Node web server: + +[source,bash] +---- +$ nodejs server.js & +Running on 3000 +---- + +Test it: + +[source,bash] +---- +$ curl localhost:3000 +Successful request. +---- + +== Access the Application + +Open a firewall port the application is listening on (note that the following command should be run on the Google Cloud Shell): + +[source,bash] +---- +$ gcloud compute firewall-rules create allow-node-svc-tcp-3000 \ + --network default \ + --action allow \ + --direction ingress \ + --rules tcp:3000 \ + --source-ranges 0.0.0.0/0 +---- + +Get the public IP of the VM: + +[source,bash] +---- +$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc-instance +---- + +Now open your browser and try to reach the application at the public IP and port 3000. + +For example, I put in my browser the following URL http://104.155.1.152:3000, but note that you'll have your own IP address. + +== Conclusion + +Congrats! +You've just deployed your application. +It is running on a dedicated set of compute resources in the cloud and is accessible by a public IP. +Now Internet users can enjoy using your application. 
+ +Now that you've got the idea of what sort of steps you have to take to deploy your code from your local machine to a virtual server running in the cloud, let's see how we can do it more efficiently. + +Destroy the current VM and firewall rule and move to the next step: + +[source,bash] +---- +$ gcloud compute instances delete -q node-svc +$ gcloud compute firewall-rules delete -q allow-node-svc-tcp-9292 +---- + +Next: xref:03-scripts.adoc[Scripts] diff --git a/docs/02-manual-operations.md b/docs/02-manual-operations.md deleted file mode 100644 index 318b291..0000000 --- a/docs/02-manual-operations.md +++ /dev/null @@ -1,193 +0,0 @@ -# Manual Operations - -To better understand the `Infrastructure as Code` (`IaC`) concept, we will first define the problem we are facing and deal with it with manually to get our hands dirty and see how things work overall. - -## Intro - -Imagine you have developed a new cool application called [raddit](https://github.com/Artemmkin/raddit). - -You want to run your application on a dedicated server and make it available to the Internet users. - -You heard about the `public cloud` thing, which allows you to provision compute resources and pay only for what you use. You believe it's a great way to test your idea of an application and see if people like it. - -You've signed up for a free tier of [Google Cloud Platform](https://cloud.google.com/) (GCP) and are about to start deploying your application. - -## Provision Compute Resources - -First thing we will do is to provision a virtual machine (VM) inside GCP for running the application. - -Use the following gcloud command in your terminal to launch a VM with Ubuntu 16.04 distro: - -```bash -$ gcloud compute instances create raddit-instance-2 \ - --image-family ubuntu-1604-lts \ - --image-project ubuntu-os-cloud \ - --boot-disk-size 10GB \ - --machine-type n1-standard-1 -``` - -## Create an SSH key pair - -Generate an SSH key pair for future connections to the VM instances (run the command exactly as it is): - -```bash -$ ssh-keygen -t rsa -f ~/.ssh/raddit-user -C raddit-user -P "" -``` - -Create an SSH public key for your project: - -```bash -$ gcloud compute project-info add-metadata \ - --metadata ssh-keys="raddit-user:$(cat ~/.ssh/raddit-user.pub)" -``` - -Add the SSH private key to the ssh-agent: - -``` -$ ssh-add ~/.ssh/raddit-user -``` - -Verify that the key was added to the ssh-agent: - -```bash -$ ssh-add -l -``` - -## Install Application Dependencies - -To start the application, you need to first configure the environment for running it. - -Connect to the started VM via SSH: - -```bash -$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-2) -$ ssh raddit-user@${INSTANCE_IP} -``` - -Install Ruby: - -```bash -$ sudo apt-get update -$ sudo apt-get install -y ruby-full build-essential -``` - -Check the installed version of Ruby: - -```bash -$ ruby -v -``` - -Install Bundler: - -```bash -$ sudo gem install --no-rdoc --no-ri bundler -$ bundle version -``` - -Clone the [application repo](https://github.com/Artemmkin/raddit), but first make sure `git` is installed: -```bash -$ git version -``` - -At the time of writing the latest image of Ubuntu 16.04 which GCP provides has `git` preinstalled, so we can skip this step. 
- -Clone the application repo into the home directory of `raddit-user` user: - -```bash -$ git clone https://github.com/Artemmkin/raddit.git -``` - -Install application dependencies using Bundler: - -```bash -$ cd ./raddit -$ sudo bundle install -``` - -## Prepare Database - -Install MongoDB which your application uses: - -```bash -$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 -$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list -$ sudo apt-get update -$ sudo apt-get install -y mongodb-org -``` - -Start MongoDB and enable autostart: - -```bash -$ sudo systemctl start mongod -$ sudo systemctl enable mongod -``` - -Verify that MongoDB is running: - -```bash -$ sudo systemctl status mongod -``` - -## Start the Application - -Download a systemd unit file for starting the application from a gist: - -```bash -$ wget https://gist.githubusercontent.com/Artemmkin/ce82397cfc69d912df9cd648a8d69bec/raw/7193a36c9661c6b90e7e482d256865f085a853f2/raddit.service -``` - -Move it to the systemd directory - -```bash -$ sudo mv raddit.service /etc/systemd/system/raddit.service -``` - -Now start the application and enable autostart: - -```bash -$ sudo systemctl start raddit -$ sudo systemctl enable raddit -``` - -Verify that it's running: - -```bash -$ sudo systemctl status raddit -``` - -## Access the Application - -Open a firewall port the application is listening on (note that the following command should be run on your local machine): - -```bash -$ gcloud compute firewall-rules create allow-raddit-tcp-9292 \ - --network default \ - --action allow \ - --direction ingress \ - --rules tcp:9292 \ - --source-ranges 0.0.0.0/0 -``` - -Get the public IP of the VM: - -```bash -$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-2 -``` - -Now open your browser and try to reach the application at the public IP and port 9292. - -For example, I put in my browser the following URL http://104.155.1.152:9292, but note that you'll have your own IP address. - -## Conclusion - -Congrats! You've just deployed your application. It is running on a dedicated set of compute resources in the cloud and is accessible by a public IP. Now Internet users can enjoy using your application. - -Now that you've got the idea of what sort of steps you have to take to deploy your code from your local machine to a virtual server running in the cloud, let's see how we can do it more efficiently. - -Destroy the current VM and move to the next step: - -```bash -$ gcloud compute instances delete raddit-instance-2 -``` - -Next: [Scripts](03-scripts.md) diff --git a/docs/03-scripts.adoc b/docs/03-scripts.adoc new file mode 100644 index 0000000..c1742bf --- /dev/null +++ b/docs/03-scripts.adoc @@ -0,0 +1,246 @@ += Scripts + +In the previous lab, you deployed the https://github.com/dm-academy/node-svc[node-svc] application by connecting to a VM via SSH and running commands in the terminal one by one. +In this lab, we'll try to automate this process a little by using `scripts`. + +== Intro + +Now think about what happens if your application becomes so popular that one virtual machine can't handle all the load of incoming requests. +Or what happens when your application somehow crashes? +Debugging a problem can take a long time and it would most likely be much faster to launch and configure a new VM than trying to fix what's broken. 
+ +In all of these cases we face the task of provisioning new virtual machines, installing the required software and repeating all of the configurations we've made in the previous lab over and over again. + +Doing it manually is boring, error-prone and time-consuming. + +The most obvious way for improvement is using Bash scripts which allow us to run sets of commands put in a single file. +So let's try this. + +== Infrastructure as Code project + +Starting from this lab, we're going to use a git repo for saving all the work done in this tutorial. + +Go to your Github account and create a new repository called iac-repo. +No README or .gitignore. +Copy the URL. + +Clone locally: + +[source,bash] +---- +$ git clone +---- + +Create a directory for this lab: + +[source,bash] +---- +$ cd iac-repo +$ mkdir 03-script +$ cd 03-script +---- + +To push your changes up to Github: + +[source,bash] +---- +$ git add . -A +$ git commit -m "first lab 03 commit" # should be relevant to the changes you made +$ git push origin master +---- + +Always issue these commands several times during each session. + +== Provisioning script + +We can automate the process of creating the VM and the firewall rule. + +In the `script` directory create a script `provision.sh`: + +[source,bash] +---- +#!/bin/bash +# add new VM +gcloud compute instances create node-svc \ + --image-family ubuntu-minimal-2004-lts \ + --image-project ubuntu-os-cloud \ + --boot-disk-size 10GB \ + --machine-type f1-micro + +# add firewall rule +gcloud compute firewall-rules create allow-node-svc-tcp-3000 \ + --network default \ + --action allow \ + --direction ingress \ + --rules tcp:3000 \ + --source-ranges 0.0.0.0/0 +---- + +Run it in the Google Cloud Shell: + +[source,bash] +---- +$ chmod +x provision.sh # changing permissions +$ ./provision.sh # you have to include the './' +---- + +You should see results similar to: + +[source,bash] +---- +WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance. +Created [https://www.googleapis.com/compute/v1/projects/proven-sum-252123/zones/us-central1-c/instances/node-svc]. +NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS +node-svc us-central1-c n1-standard-1 10.128.15.202 34.69.206.6 RUNNING +Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/proven-sum-252123/global/firewalls/allow-node-svc-3000]. +Creating firewall...done. +NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED +allow-node-svc-3000 default INGRESS 1000 tcp:3000 False +---- + +== Installation script + +Before we can run our application, we need to create a running environment for it by installing dependent packages and configuring the OS. +Then we copy the application, initialize NPM and download express.js, and start the server. + +We are going to use the same commands we used before to do that, but this time, instead of running commands one by one, we'll create a `bash script` to save us some struggle. + +In the `03-script` directory create bash script `config.sh` to install node, npm, express, and git. +Create a script `install.sh` to download the app and initialize node. + +[source,bash] +---- +#!/bin/bash +set -e # exit immediately if anything returns non-zero. 
See https://www.javatpoint.com/linux-set-command + +echo " ----- install node, npm, git ----- " +apt-get update +apt-get install -y nodejs npm git +---- + +[source,bash] +---- +#!/bin/bash +set -e # exit immediately if anything returns non-zero. See https://www.javatpoint.com/linux-set-command + +echo " ----- download, initialize, and run app ----- " +git clone https://github.com/dm-academy/node-svc-v1 +cd node-svc-v1 +git checkout 02 +npm install +npm install express +---- + +NOTE: Why two scripts? +Discuss in class. + +== Run the scripts + +Copy the script to the created VM: + +[source,bash] +---- +$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc) +$ scp -r config.sh install.sh node-user@${INSTANCE_IP}:/home/node-user +---- + +If sucessful, you should see something like: + +[source,bash] +---- +config.sh 100% 214 279.9KB/s 00:00 +install.sh 100% 214 279.9KB/s 00:00 +---- + +NOTE: If you get an `offending ECDSA key` error, use the suggested removal command. + +NOTE: If you get the error `Permission denied (publickey).`, this probably means that your ssh-agent no longer has the node-user private key added. +This easily happens if the Google Cloud Shell goes to sleep and wipes out your session. +Check via issuing `ssh-add -l`. + +If you get a message to the effect that your agent is not running, type `eval `ssh-agent`` and then `ssh-add -l`. + +You should see something like `2048 SHA256:bII5VsQY3fCWXEai0lUeChEYPaagMXun3nB9U2eoUEM /home/betz4871/.ssh/node-user (RSA)`. +If you do not, re-issue the command `ssh-add ~/.ssh/node-user` and re-confirm with `ssh-add -l`. + +Connect to the VM via SSH: + +[source,bash] +---- +$ ssh node-user@${INSTANCE_IP} +---- + +Have a look at what's in the directory (use `ls` and `cat`). +Do you understand exactly how it got there? +If you do not, ask. + +Run the script and launch the server: + +[source,bash] +---- +$ chmod +x *.sh +$ sudo ./config.sh && ./install.sh # running 2 commands on one line +$ sudo nodejs node-svc-v1/server.js & +---- + +The last output should be `Running on 3000`. +You may need to hit Return or Enter to get a command prompt. + +To test that the server is running locally, type: + +[source,bash] +---- +$ curl localhost:3000 +---- + +You should receive this: + +[source,bash] +---- +Successful request. +---- + +== Access the Application + +Access the application in your browser by its public IP (don't forget to specify the port 3000). + +Open another terminal and run the following command to get a public IP of the VM: + +[source,bash] +---- +$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc +---- + +== Destroy (de-provision) the resources by script + +In the `provision` directory create a script `deprovision.sh`. + +[source,bash] +---- +#!/bin/bash +gcloud compute instances delete -q node-svc +gcloud compute firewall-rules delete -q allow-node-svc-tcp-3000 +---- + +Set permissions correctly (see previous) and execute. +You should get results like: + +`+bash Deleted [https://www.googleapis.com/compute/v1/projects/proven-sum-252123/zones/us-central1-c/instances/node-svc]. +Deleted [https://www.googleapis.com/compute/v1/projects/proven-sum-252123/global/firewalls/allow-node-svc-tcp-3000].+` + +== Save and commit the work + +Save and commit the scripts created in this lab into your `iac-tutorial` repo. 
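+ +As a reminder, this is the same Git workflow shown at the start of this lab (a minimal sketch; run it from inside your repository and substitute a commit message that describes your own changes): + +[source,bash] +---- +$ git add . -A +$ git commit -m "add provision, config, install, and deprovision scripts" # example message only +$ git push origin master +----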
+ +== Conclusion + +Scripts helped us to save some time and effort of manually running every command one by one to configure the system and start the application. + +The process of system configuration becomes more or less standardized and less error-prone, as you put commands in the order they should be run and test it to ensure it works as expected. + +It's also a first step we've made in the direction of automating operations work. + +But scripts are not suitable for every operations task and have many downsides. +We'll discuss more on that in the next labs. + +Next: xref:04-packer.adoc[Packer] diff --git a/docs/03-scripts.md b/docs/03-scripts.md deleted file mode 100644 index e53a828..0000000 --- a/docs/03-scripts.md +++ /dev/null @@ -1,156 +0,0 @@ -# Scripts - -In the previous lab, you deployed the [raddit](https://github.com/Artemmkin/raddit) application by connecting to a VM via SSH and running commands in the terminal one by one. In this lab, we'll try to automate this process a little by using `scripts`. - -## Intro - -Now think about what happens if your application becomes so popular that one virtual machine can't handle all the load of incoming requests. Or what happens when your application somehow crashes? Debugging a problem can take a long time and it would most likely be much faster to launch and configure a new VM than trying to fix what's broken. - -In all of these cases we face the task of provisioning new virtual machines, installing the required software and repeating all of the configurations we've made in the previous lab over and over again. - -Doing it manually is `boring`, `error-prone` and `time-consuming`. - -The most obvious way for improvement is using Bash scripts which allow us to run sets of commands put in a single file. So let's try this. - -## Provision Compute Resources - -Start a new VM for this lab. The command should look familiar: - -```bash -$ gcloud compute instances create raddit-instance-3 \ - --image-family ubuntu-1604-lts \ - --image-project ubuntu-os-cloud \ - --boot-disk-size 10GB \ - --machine-type n1-standard-1 -``` - -## Infrastructure as Code project - -Starting from this lab, we're going to use a git repo for saving all the work done in this tutorial. - -Download a repo for the tutorial: - -```bash -$ git clone https://github.com/Artemmkin/iac-tutorial.git -``` - -Delete git information about a remote repository: -```bash -$ cd ./iac-tutorial -$ git remote remove origin -``` - -Create a directory for this lab: - -```bash -$ mkdir scripts -``` - -## Configuration script - -Before we can run our application, we need to create a running environment for it by installing dependent packages and configuring the OS. - -We are going to use the same commands we used before to do that, but this time, instead of running commands one by one, we'll create a `bash script` to save us some struggle. - -Create a bash script to install Ruby, Bundler and MongoDB, and copy a systemd unit file for the application. 
- -Save it to the `configuration.sh` file inside created `scripts` directory: - -```bash -#!/bin/bash -set -e - -echo " ----- install ruby and bundler ----- " -apt-get update -apt-get install -y ruby-full build-essential -gem install --no-rdoc --no-ri bundler - -echo " ----- install mongodb ----- " -apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 -echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" > /etc/apt/sources.list.d/mongodb-org-3.2.list -apt-get update -apt-get install -y mongodb-org - -echo " ----- start mongodb ----- " -systemctl start mongod -systemctl enable mongod - -echo " ----- copy unit file for application ----- " -wget https://gist.githubusercontent.com/Artemmkin/ce82397cfc69d912df9cd648a8d69bec/raw/7193a36c9661c6b90e7e482d256865f085a853f2/raddit.service -mv raddit.service /etc/systemd/system/raddit.service -``` - -## Deployment script - -Create a script for copying the application code from GitHub repository, installing dependent gems and starting it. - -Save it into `deploy.sh` file inside `scripts` directory: - -```bash -#!/bin/bash -set -e - -echo " ----- clone application repository ----- " -git clone https://github.com/Artemmkin/raddit.git - -echo " ----- install dependent gems ----- " -cd ./raddit -sudo bundle install - -echo " ----- start the application ----- " -sudo systemctl start raddit -sudo systemctl enable raddit -``` - -## Run the scripts - -Copy the `scripts` directory to the created VM: - -```bash -$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-3) -$ scp -r ./scripts raddit-user@${INSTANCE_IP}:/home/raddit-user -``` - -Connect to the VM via SSH: -```bash -$ ssh raddit-user@${INSTANCE_IP} -``` - -Run the scripts: -```bash -$ chmod +x ./scripts/*.sh -$ sudo ./scripts/configuration.sh -$ ./scripts/deploy.sh -``` - -## Access the Application - -Access the application in your browser by its public IP (don't forget to specify the port 9292). - -Open another terminal and run the following command to get a public IP of the VM: - -```bash -$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-3 -``` - -## Save and commit the work - -Save and commit the scripts created in this lab into your `iac-tutorial` repo. - -## Conclusion - -Scripts helped us to save some time and effort of manually running every command one by one to configure the system and start the application. - -The process of system configuration becomes more or less standardized and less error-prone, as you put commands in the order they should be run and test it to ensure it works as expected. - -It's also a first step we've made in the direction of automating operations work. - -But scripts are not suitable for every operations task and have many downsides. We'll discuss more on that in the next labs. - -Destroy the current VM before moving onto the next step: - -```bash -$ gcloud compute instances delete raddit-instance-3 -``` - -Next: [Packer](04-packer.md) diff --git a/docs/04-packer.adoc b/docs/04-packer.adoc new file mode 100644 index 0000000..5ecdf88 --- /dev/null +++ b/docs/04-packer.adoc @@ -0,0 +1,244 @@ += Packer + +Scripts helped us speed up the process of system configuration, and made it more reliable compared to doing everything manually, but there are still ways for improvement. 
+ +In this lab, we're going to take a look at the first IaC tool in this tutorial called https://www.packer.io/[Packer] and see how it can help us improve our operations. + +== Intro + +Remember how in the second lab we had to make install nodejs, npm, and even git on the VM so that we could clone the application repo? +Did it surprise you that `git` was not already installed on the system? + +Imagine how nice it would be to have required packages like nodejs and npm preinstalled on the VM we provision, or have necessary configuration files come with the image, too. +This would require even less time and effort from us to configure the system and run our application. + +Luckily, we can create custom machine images with required configuration and software installed using Packer, an IaC tool by Hashicorp. +Let's check it out. + +== Install Packer + +https://www.packer.io/downloads.html[Download] and install Packer onto your system (this means the Google Cloud Shell). +You will need to figure this out. + +If you have issues, consult https://github.com/dm-academy/iac-tutorial-rsrc/blob/master/packer/install-packer.sh[this script]. + +Check the version to verify that it was installed: + +[source,bash] +---- +$ packer -v +---- + +== Infrastructure as Code project + +Create a new directory called `04-packer` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. + +== Define image builder + +The way Packer works is simple. +It starts a VM with specified characteristics, configures the operating system and installs the software you specify, and then it creates a machine image from that VM. + +The part of packer responsible for starting a VM and creating an image from it is called https://www.packer.io/docs/builders/index.html[builder]. + +So before using packer to create images, we need to define a builder configuration in a JSON file (which is called *template* in Packer terminology). + +Create a `node-svc-base-image.json` file inside the `packer` directory with the following content (make sure to change the project ID, and also the zone in case it's different): + +[source,json] +---- +{ + "builders": [ + { + "type": "googlecompute", + "project_id": "YOUR PROJECT HERE. YOU MUST CHANGE THIS", + "zone": "us-central1-c", + "machine_type": "f1-micro", + "source_image_family": "ubuntu-minimal-2004-lts", + "image_name": "node-svc-base-{{isotime \"2006-01-02 03:04:05\"}}", + "image_family": "node-svc-base", + "image_description": "Ubuntu 16.04 with git, nodejs, npm preinstalled", + "ssh_username": "node-user" + } + ] +} +---- + +This template describes where and what type of a VM to launch for image creation (`type`, `project_id`, `zone`, `machine_type`, `source_image_family`). +It also defines image saving configuration such as under which name (`image_name`) and image family (`image_family`) the resulting image should be saved and what description to give it (`image_description`). +SSH user configuration is used by provisioners which will talk about later. + +Validate the template: + +[source,bash] +---- +$ packer validate node-svc-base-image.json +---- + +== Define image provisioner + +As we already mentioned, builders are only responsible for starting a VM and creating an image from that VM. +The real work of system configuration and installing software on the running VM is done by another Packer component called *provisioner*. 
+ +Add a https://www.packer.io/docs/provisioners/shell.html[shell provisioner] to your template to run the `deploy.sh` script you created in the previous lab. + +Your template should look similar to this one: + +[source,json] +---- +{ + "builders": [ + { + "type": "googlecompute", + "project_id": "YOUR PROJECT HERE. YOU MUST CHANGE THIS", + "zone": "us-central1-c", + "machine_type": "f1-micro", + "source_image_family": "ubuntu-1604-lts", + "image_name": "node-svc-base-{{isotime `20200901-000001`}}", + "image_family": "node-svc-base", + "image_description": "Ubuntu 16.04 with git, nodejs, npm, and node-svc preinstalled", + "ssh_username": "node-user" + } + ], + "provisioners": [ + { + "type": "shell", + "script": "{{template_dir}}/../03-script/config.sh", + "execute_command": "sudo {{.Path}}" + } + ] +} +---- + +Make sure the template is valid: + +[source,bash] +---- +$ packer validate ./packer/node-base-image.json +---- + +== Create custom machine image + +Build the image for your application: + +[source,bash] +---- +$ packer build node-svc-base-image.json +---- + +If you go to the https://console.cloud.google.com/compute/images[Compute Engine Images] page you should see your new custom image. + +== Launch a VM with your custom built machine image + +Once the image is built, use it as a boot disk to start a VM: + +[source,bash] +---- +$ gcloud compute instances create node-svc \ + --image-family node-svc-base \ + --boot-disk-size 10GB \ + --machine-type f1-micro +---- + +== Deploy Application + +Copy the installation script to the VM: + +$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc) $ scp -r ../03-script/install.sh node-user@$\{INSTANCE_IP}:/home/node-user + +Connect to the VM via SSH: + +[source,bash] +---- +$ ssh node-user@${INSTANCE_IP} +---- + +NOTE: If you get an offending ECDSA key error, use the suggested removal command. + +NOTE: If you get the error `Permission denied (publickey).`, this probably means that your ssh-agent no longer has the node-user private key added. +This easily happens if the Google Cloud Shell goes to sleep and wipes out your session. +Check via issuing `ssh-add -l`. +You should see something like `2048 SHA256:bII5VsQY3fCWXEai0lUeChEYPaagMXun3nB9U2eoUEM /home/betz4871/.ssh/node-user (RSA)`. +If you do not, re-issue the command `ssh-add ~/.ssh/node-user` and re-confirm with `ssh-add -l`. + +Verify git, nodejs, and npmare installed. +Do you understand how they got there? +(Your results may be slightly different, but if you get errors, investigate or ask for help): + +[source,bash] +---- +node-user@node-svc:~$ npm -v +6.14.4 +node-user@node-svc:~$ node -v +v10.19.0 +node-user@node-svc:~$ git --version +git version 2.25.1 +---- + +Run the installation script, and then the server: + +[source,bash] +---- +$ chmod +x *.sh +$ sudo ./install.sh +$ sudo nodejs node-svc-v1/server.js & +---- + +== Access Application + +Manually re-create the firewall rule: + +[source,bash] +---- +$ gcloud compute firewall-rules create allow-node-svc-tcp-3000 \ + --network default \ + --action allow \ + --direction ingress \ + --rules tcp:3000 \ + --source-ranges 0.0.0.0/0 +---- + +Open another terminal and run the following command to get a public IP of the VM: + +[source,bash] +---- +$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc +---- + +Access the application in your browser by its public IP (don't forget to specify the port 3000). 
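+ +If you prefer to verify from the Google Cloud Shell rather than a browser, the `curl` check from the previous lab should also work against the public IP (assuming the firewall rule above was created successfully): + +[source,bash] +---- +$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc) +$ curl $INSTANCE_IP:3000 +Successful request. +----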
+ +== De-provision + +[source,bash] +---- +$ ../03-script/deprovision.sh #notice path +---- + +== Save and commit the work + +Save and commit the packer template created in this lab into your `iac-tutorial` repo. + +== Learning more about Packer + +Packer configuration files are called templates for a reason. +They often get parameterized with https://www.packer.io/docs/templates/user-variables.html[user variables]. +This could be very helpful since you can create multiple machine images with different configurations for different purposes using one template file. + +Adding user variables to a template is easy, follow the https://www.packer.io/docs/templates/user-variables.html[documentation] on how to do that. + +== Immutable infrastructure + +By putting everything inside the image including the application, we have achieved an https://martinfowler.com/bliki/ImmutableServer.html[immutable infrastructure]. +It is based on the idea `we build it once, and we never change it`. + +It has advantages of spending less time (zero in this case) on system configuration after VM's start, and prevents *configuration drift*, but it's also not easy to implement. + +== Conclusion + +In this lab you've used Packer to create a custom machine image for running your application. + +Its advantages include: + +* `It requires less time and effort to configure a new VM for running the application` +* `System configuration becomes more reliable.` When we start a new VM to deploy the application, we know for sure that it has the right packages installed and configured properly, since we built and tested the image. + +Next: xref:05-terraform.adoc[Terraform] diff --git a/docs/04-packer.md b/docs/04-packer.md deleted file mode 100644 index 6e54a67..0000000 --- a/docs/04-packer.md +++ /dev/null @@ -1,194 +0,0 @@ -# Packer - -Scripts helped us speed up the process of system configuration, and made it more reliable compared to doing everything manually, but there are still ways for improvement. - -In this lab, we're going to take a look at the first IaC tool in this tutorial called [Packer](https://www.packer.io/) and see how it can help us improve our operations. - -## Intro - -Remember how in the second lab we had to make sure that the `git` was installed on the VM so that we could clone the application repo? Did it surprise you in a good way that the `git` was already installed on the system and we could skip the installation? - -Imagine how nice it would be to have other required packages like Ruby and Bundler preinstalled on the VM we provision, or have necessary configuration files come with the image, too. This would require even less time and effort from us to configure the system and run our application. - -Luckily, we can create custom machine images with required configuration and software installed using Packer. Let's check it out. - -## Install Packer - -[Download](https://www.packer.io/downloads.html) and install Packer onto your system. - -Check the version to verify that it was installed: - -```bash -$ packer -v -``` - -## Infrastructure as Code project - -Create a new directory called `packer` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. - -## Define image builder - -The way Packer works is simple. It starts a VM with specified characteristics, configures the operating system and installs the software you specify, and then it creates a machine image from that VM. 
- -The part of packer responsible for starting a VM and creating an image from it is called [builder](https://www.packer.io/docs/builders/index.html). - -So before using packer to create images, we need to define a builder configuration in a JSON file (which is called **template** in Packer terminology). - -Create a `raddit-base-image.json` file inside the `packer` directory with the following content (make sure to change the project ID and zone in case it's different): - -```json -{ - "builders": [ - { - "type": "googlecompute", - "project_id": "infrastructure-as-code", - "zone": "europe-west1-b", - "machine_type": "g1-small", - "source_image_family": "ubuntu-1604-lts", - "image_name": "raddit-base-{{isotime `20060102-150405`}}", - "image_family": "raddit-base", - "image_description": "Ubuntu 16.04 with Ruby, Bundler and MongoDB preinstalled", - "ssh_username": "raddit-user" - } - ] -} -``` - -This template describes where and what type of a VM to launch for image creation (`type`, `project_id`, `zone`, `machine_type`, `source_image_family`). It also defines image saving configuration such as under which name (`image_name`) and image family (`image_family`) the resulting image should be saved and what description to give it (`image_description`). SSH user configuration is used by provisioners which will talk about later. - -Validate the template: - -```bash -$ packer validate ./packer/raddit-base-image.json -``` - -## Define image provisioner - -As we already mentioned, builders are only responsible for starting a VM and creating an image from that VM. The real work of system configuration and installing software on the running VM is done by another Packer component called **provisioner**. - -Add a [shell provisioner](https://www.packer.io/docs/provisioners/shell.html) to your template to run the `configuration.sh` script you created in the previous lab. 
- -Your template should look similar to this one: - -```json -{ - "builders": [ - { - "type": "googlecompute", - "project_id": "infrastructure-as-code", - "zone": "europe-west1-b", - "machine_type": "g1-small", - "source_image_family": "ubuntu-1604-lts", - "image_name": "raddit-base-{{isotime `20060102-150405`}}", - "image_family": "raddit-base", - "image_description": "Ubuntu 16.04 with Ruby, Bundler and MongoDB preinstalled", - "ssh_username": "raddit-user" - } - ], - "provisioners": [ - { - "type": "shell", - "script": "{{template_dir}}/../scripts/configuration.sh", - "execute_command": "sudo {{.Path}}" - } - ] -} -``` - -Make sure the template is valid: - -```bash -$ packer validate ./packer/raddit-base-image.json -``` - -## Create custom machine image - -Build the image for your application: - -```bash -$ packer build ./packer/raddit-base-image.json -``` - -## Launch a VM with your custom built machine image - -Once the image is built, use it as a boot disk to start a VM: - -```bash -$ gcloud compute instances create raddit-instance-4 \ - --image-family raddit-base \ - --boot-disk-size 10GB \ - --machine-type n1-standard-1 -``` - -## Deploy Application - -Copy `deploy.sh` script to the created VM: - -```bash -$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-4) -$ scp ./scripts/deploy.sh raddit-user@${INSTANCE_IP}:/home/raddit-user -``` - -Connect to the VM via SSH: - -```bash -$ ssh raddit-user@${INSTANCE_IP} -``` - -Verify Ruby, Bundler and MongoDB are installed: - -```bash -$ ruby -v -$ bundle version -$ sudo systemctl status mongod -``` - -Run deployment script: - -```bash -$ chmod +x ./deploy.sh -$ ./deploy.sh -``` - -## Access Application - -Access the application in your browser by its public IP (don't forget to specify the port 9292). - -Open another terminal and run the following command to get a public IP of the VM: - -```bash -$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-4 -``` - -## Save and commit the work - -Save and commit the packer template created in this lab into your `iac-tutorial` repo. - -## Learning more about Packer - -Packer configuration files are called templates for a reason. They often get parameterized with [user variables](https://www.packer.io/docs/templates/user-variables.html). This could be very helpful since you can create multiple machine images with different configuration and for different purposes using one template file. - -Adding user variables to a template is easy, follow the [documentation](https://www.packer.io/docs/templates/user-variables.html) on how to do that. - -## Immutable infrastructure - -You may wonder why not to put everything inside the image including the application? Well, this approach is called an [immutable infrastructure](https://martinfowler.com/bliki/ImmutableServer.html). It is based on the idea `we build it once, and we never change it`. - -It has advantages of spending less time (zero in this case) on system configuration after VM's start, and prevents **configuration drift**, but it's also not easy to implement. - -## Conclusion - -In this lab you've used Packer to create a custom machine image for running your application. 
- -The advantages of its usage are quite obvious: - -* `It requires less time and effort to configure a new VM for running the application` -* `System configuration becomes more reliable.` When we start a new VM to deploy the application, we know for sure that it has the right packages installed and configured properly, since we built and tested the image. - -Destroy the current VM and move onto the next lab: - -```bash -$ gcloud compute instances delete raddit-instance-4 -``` - -Next: [Terraform](05-terraform.md) diff --git a/docs/05-terraform.adoc b/docs/05-terraform.adoc new file mode 100644 index 0000000..d54fbaa --- /dev/null +++ b/docs/05-terraform.adoc @@ -0,0 +1,344 @@ += Terraform + +In the previous lab, you used scripts to make your system configuration faster and more reliable. +But we still have a lot to improve. + +In this lab, we're going to learn about the IaC tool by HashiCorp called https://www.terraform.io/[Terraform]. + +== Intro + +Think about your current operations... + +Do you see any problems you may have, or any ways for improvement? + +Remember, that each time we want to deploy an application, we have to `provision` compute resources first, that is to start a new VM. + +We do it via a `gcloud` command like this: + +[source,bash] +---- +$ gcloud compute instances create node-svc \ + --image-family ubuntu-minimal-2004-lts \ + --boot-disk-size 10GB \ + --machine-type f1-micro +---- + +At this stage, it doesn't seem like there are any problems with this. +But, in fact, there are. + +Infrastructure for running your services and applications could be huge. +You might have tens, hundreds or even thousands of virtual machines, hundreds of firewall rules, multiple VPC networks and load balancers. +Additionally, the infrastructure could be split between multiple teams. +Such infrastructure looks, and is, very complex and yet should be run and managed in a consistent and predictable way. + +If we create and change infrastructure components using the Web User Interface (UI) Console or even the gcloud command ine interface (CLI) tool, over time we won't be able to describe exactly in which `state` our infrastructure is in right now, meaning `we lose control over it`. + +This happens because you tend to forget what changes you've made a few months ago and why you made them. +If multiple people across multiple teams are managing infrastructure, this makes things even worse. + +So we see here 2 clear problems: + +* we don't know the current state of our infrastructure +* we can't control the changes + +The second problem is dealt by source control tools like `git`, while the first one is solved by using tools like Terraform. +Let's find out how. + +== Terraform + +Terraform is already installed on Google Cloud Shell. + +If you want to install it on a laptop or VM, you can https://www.terraform.io/downloads.html[download here]. + +Make sure Terraform version is \=> 0.11.0: + +[source,bash] +---- +$ terraform -v +---- + +== Infrastructure as Code project + +Create a new directory called `05-terraform` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. + +== Describe VM instance + +_Terraform allows you to describe the desired state of your infrastructure and makes sure your desired state meets the actual state._ + +Terraform uses https://www.terraform.io/docs/configuration/resources.html[*resources*] to describe different infrastructure components. 
+If you want to use Terraform to manage an infrastructure component, you should first make sure there is a resource for that component for that particular platform. + +Let's use Terraform syntax to describe a VM instance that we want to be running. + +Create a Terraform configuration file called `main.tf` inside the `05-terraform` directory with the following content: + +---- +resource "google_compute_instance" "node-svc" { + name = "node-svc" + machine_type = "f1-micro" + zone = "us-central1-c" + + # boot disk specifications + boot_disk { + initialize_params { + image = "node-svc-base" // use image built with Packer + } + } + + # networks to attach to the VM + network_interface { + network = "default" + access_config {} // use ephemeral public IP + } +} +---- + +Here we use https://www.terraform.io/docs/providers/google/r/compute_instance.html[google_compute_instance] resource to manage a VM instance running in Google Cloud Platform. + +== Define Resource Provider + +One of the advantages of Terraform over other alternatives like https://aws.amazon.com/cloudformation/?nc1=h_ls[CloudFormation] is that it's `cloud-agnostic`, meaning it can work with many different cloud providers like AWS, GCP, Azure, or OpenStack. +It can also work with resources of different services like databases (e.g., PostgreSQL, MySQL), orchestrators (Kubernetes, Nomad) and https://www.terraform.io/docs/providers/[others]. + +This means that Terraform has a pluggable architecture and the pluggable component that allows it to work with a specific platform or service is called *provider*. + +So before we can actually create a VM using Terraform, we need to define a configuration of a https://www.terraform.io/docs/providers/google/index.html[google cloud provider] and download it on our system. + +Create another file inside `terraform` folder and call it `providers.tf`. +Put provider configuration in it: + +---- +provider "google" { + version = "~> 2.5.0" + project = "YOU MUST PUT YOUR PROJECT NAME HERE" + region = "us-central1-c" +} +---- + +Make sure to change the `project` value in provider's configuration above to your project's ID. +You can get your default project's ID by running the command: + +[source,bash] +---- +$ gcloud config list project +---- + +Now run the `init` command inside `terraform` directory to download the provider: + +[source,bash] +---- +$ terraform init +---- + +== Bring Infrastructure to a Desired State + +Once we described a desired state of the infrastructure (in our case it's a running VM), let's use Terraform to bring the infrastructure to this state: + +[source,bash] +---- +$ terraform apply +---- + +After Terraform ran successfully, use a gcloud command to verify that the machine was indeed launched: + +[source,bash] +---- +$ gcloud compute instances describe node-svc +---- + +== Deploy Application + +We did provisioning via Terraform, but we still need to install and start our application. 
+Let's do this remotely this time, instead of logging into the machine: + +[source,bash] +---- +$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc) # get IP of VM +$ scp -r ../03-script/install.sh node-user@${INSTANCE_IP}:/home/node-user # copy install script +$ rsh ${INSTANCE_IP} -l node-user chmod +x /home/node-user/install.sh # set permissions +$ rsh ${INSTANCE_IP} -l node-user /home/node-user/install.sh # install app +$ rsh ${INSTANCE_IP} -l node-user sudo nodejs /home/node-user/node-svc-v1/server.js & # run app +---- + +NOTE: If you get an offending ECDSA key error, use the suggested removal command. + +NOTE: If you get the error `Permission denied (publickey).`, this probably means that your ssh-agent no longer has the node-user private key added. +This easily happens if the Google Cloud Shell goes to sleep and wipes out your session. +Check via issuing `ssh-add -l`. +You should see something like `2048 SHA256:bII5VsQY3fCWXEai0lUeChEYPaagMXun3nB9U2eoUEM /home/betz4871/.ssh/node-user (RSA)`. +If you do not, re-issue the command `ssh-add ~/.ssh/node-user` and re-confirm with `ssh-add -l`. + +Connect to the VM via SSH: + +[source,bash] +---- +$ ssh node-user@${INSTANCE_IP} +---- + +Check that servce is running, and then exit: + +[source,bash] +---- +node-user@node-svc:~$ curl localhost:3000 +Successful request. +node-user@node-svc:~$ exit +---- + +== Access the Application Externally7 + +Manually create the firewall rule: + +[source,bash] +---- +$ gcloud compute firewall-rules create allow-node-svc-tcp-3000 \ + --network default \ + --action allow \ + --direction ingress \ + --rules tcp:3000 \ + --source-ranges 0.0.0.0/0 +---- + +Open another terminal and run the following command to get a public IP of the VM: + +[source,bash] +---- +$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc +---- + +Access the application in your browser by its public IP (don't forget to specify the port 3000). + +== Add other GCP resources into Terraform + +Let's add ssh keys and the firewall rule into our Terraform configuration so that we know for sure those resources are present. + +First, delete the SSH project key and firewall rule: + +[source,bash] +---- +$ gcloud compute project-info remove-metadata --keys=ssh-keys +$ gcloud compute firewall-rules delete allow-node-svc-tcp-3000 +---- + +Make sure that your application became inaccessible via port 3000 and SSH connection with a private key of `node-user` fails. + +Then add appropriate resources into `main.tf` file. 
+Your final version of the `main.tf` file should look similar to this (change the ssh key file path, if necessary):
+
+----
+resource "google_compute_instance" "node-svc" {
+  name         = "node-svc"
+  machine_type = "f1-micro"
+  zone         = "us-central1-c"
+
+  # boot disk specifications
+  boot_disk {
+    initialize_params {
+      image = "node-svc-base" // use image built with Packer
+    }
+  }
+
+  # networks to attach to the VM
+  network_interface {
+    network = "default"
+    access_config {} // use ephemeral public IP
+  }
+}
+
+resource "google_compute_project_metadata" "node-svc" {
+  metadata = {
+    ssh-keys = "node-user:${file("~/.ssh/node-user.pub")}" // path to ssh key file
+  }
+}
+
+resource "google_compute_firewall" "node-svc" {
+  name    = "allow-node-svc-tcp-3000"
+  network = "default"
+  allow {
+    protocol = "tcp"
+    ports    = ["3000"]
+  }
+  source_ranges = ["0.0.0.0/0"]
+}
+----
+
+Tell Terraform to apply the changes to bring the actual infrastructure state to the desired state we described:
+
+[source,bash]
+----
+$ terraform apply
+----
+
+Using the same techniques as above, verify that the application became accessible again on port 3000 (locally and remotely) and that the SSH connection with the private key works.
+Here's a new way to check it from the Google Cloud Shell (you don't ssh into the VM):
+
+[source,bash]
+----
+$ curl $INSTANCE_IP:3000
+----
+
+== Create an output variable
+
+We have frequently used this gcloud command to retrieve the public IP address of a VM:
+
+[source,bash]
+----
+$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe node-svc
+----
+
+We can tell Terraform to provide us this information using https://www.terraform.io/intro/getting-started/outputs.html[output variables].
+
+Create another configuration file inside the `05-terraform` directory and call it `outputs.tf`.
+Put the following content in it:
+
+----
+output "node_svc_public_ip" {
+  value = "${google_compute_instance.node-svc.network_interface.0.access_config.0.nat_ip}"
+}
+----
+
+Run `terraform apply` again, this time with auto-approve:
+
+[source,bash]
+----
+$ terraform apply -auto-approve
+
+google_compute_instance.node-svc: Refreshing state... [id=node-svc]
+google_compute_firewall.node-svc: Refreshing state... [id=allow-node-svc-tcp-3000]
+google_compute_project_metadata.node-svc: Refreshing state... [id=proven-sum-252123]
+Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
+Outputs:
+node_svc_public_ip = 34.71.90.74
+----
+
+A couple of things to notice here.
+First, we did not destroy anything, so Terraform only refreshes its state; it confirms that the resources still match the configuration.
+During this Terraform run, no resources have been created or changed, which means that the actual state of our infrastructure already meets the requirements of the desired state.
+
+Second, under "Outputs:", you should see the public IP of the VM we created.
+
+== Save and commit the work
+
+Save and commit the `05-terraform` folder created in this lab into your `iac-tutorial` repo.
+
+== Conclusion
+
+In this lab, you saw the Infrastructure as Code practice applied in its most direct form.
+
+We used _code_ (Terraform configuration syntax) to describe the _desired state_ of the infrastructure.
+Then we told Terraform to bring the actual state of the infrastructure to the desired state we described.
+
+With this approach, the Terraform configuration becomes _a single source of truth_ about the current state of your infrastructure.
+Moreover, the infrastructure is described as code, so we can apply to it the same practices we commonly use in development such as keeping the code in source control, use peer reviews for making changes, etc. + +All of this helps us get control over even the most complex infrastructure. + +Destroy the resources created by Terraform and move on to the next lab. + +[source,bash] +---- +$ terraform destroy -auto-approve +---- + +Next: xref:06-ansible.adoc[Ansible] diff --git a/docs/05-terraform.md b/docs/05-terraform.md deleted file mode 100644 index 5cb121f..0000000 --- a/docs/05-terraform.md +++ /dev/null @@ -1,277 +0,0 @@ -# Terraform - -In the previous lab, you used Packer to make your system configuration faster and more reliable. But we still have a lot to improve. - -In this lab, we're going to learn about another IaC tool by HashiCorp called [Terraform](https://www.terraform.io/). - -## Intro - -Think about your current operations... - -Do you see any problems you may have, or any ways for improvement? - -Remember, that each time we want to deploy an application, we have to `provision` compute resources first, that is to start a new VM. - -We do it via a `gcloud` command like this: - -```bash -$ gcloud compute instances create raddit-instance-4 \ - --image-family raddit-base \ - --boot-disk-size 10GB \ - --machine-type n1-standard-1 -``` - -At this stage, it doesn't seem like there are any problems with this. But, in fact, there is. - -Infrastructure for running your services and applications could be huge. You might have tens, hundreds or even thousands of virtual machines, hundreds of firewall rules, multiples VPC networks, and load balancers. In addition to that, the infrastructure could be split between multiple teams and managed separately. Such infrastructure looks very complex and yet should be run and managed in a consistent and predictable way. - -If we create and change infrastructure components using gcloud CLI tool or Web UI Console, over time we won't be able to describe exactly in which `state` our infrastructure is in right now, meaning `we lose control over it`. - -This happens because you tend to forget what changes you've made a few months ago and why you did it. If multiple people are managing infrastructure, this makes things even worse, because you can't know what changes other people are making even though your communication inside the team could be great. - -So we see here 2 clear problems: - -* we don't know the current state of our infrastructure -* we can't control the changes - -The second problem is dealt by source control tools like `git`, while the first one is solved by using tools like Terraform. Let's find out how. - -## Install Terraform - -[Download](https://www.terraform.io/downloads.html) and install Terraform on your system. - -Make sure Terraform version is => 0.11.0: - -```bash -$ terraform -v -``` - -## Infrastructure as Code project - -Create a new directory called `terraform` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. - -## Describe VM instance - -_Terraform allows you to describe the desired state of your infrastructure and makes sure your desired state meets the actual state._ - -Terraform uses **resources** to describe different infrastructure components. If you want to use Terraform to manage some infrastructure component, you should first make sure there is a resource for that component for that particular platform. 
- -Let's use Terraform syntax to describe a VM instance that we want to be running. - -Create a Terraform configuration file called `main.tf` inside the `terraform` directory with the following content: - -``` -resource "google_compute_instance" "raddit" { - name = "raddit-instance" - machine_type = "n1-standard-1" - zone = "europe-west1-b" - - # boot disk specifications - boot_disk { - initialize_params { - image = "raddit-base" // use image built with Packer - } - } - - # networks to attach to the VM - network_interface { - network = "default" - access_config {} // use ephemeral public IP - } -} -``` - -Here we use [google_compute_instance](https://www.terraform.io/docs/providers/google/r/compute_instance.html) resource to manage a VM instance running in Google Cloud Platform. - -## Define Resource Provider - -One of the advantages of Terraform over other alternatives like [CloudFormation](https://aws.amazon.com/cloudformation/?nc1=h_ls) is that it's `cloud-agnostic`, meaning it can work with many different cloud providers like AWS, GCP, Azure, or OpenStack. It can also work with resources of different services like databases (e.g., PostgreSQL, MySQL), orchestrators (Kubernetes, Nomad) and [others](https://www.terraform.io/docs/providers/). - -This means that Terraform has a pluggable architecture and the pluggable component that allows it to work with a specific platform or service is called **provider**. - -So before we can actually create a VM using Terraform, we need to define a configuration of a [google cloud provider](https://www.terraform.io/docs/providers/google/index.html) and download it on our system. - -Create another file inside `terraform` folder and call it `providers.tf`. Put provider configuration in it: - -``` -provider "google" { - version = "~> 1.4.0" - project = "infrastructure-as-code" - region = "europe-west1" -} -``` - -Note the `region` value, this is where terraform will provision resources (you may wish to change it). - -Make sure to change the `project` value in provider's configuration above to your project's ID. You can get your default project's ID by running the command: - -```bash -$ gcloud config list project -``` - -Now run the `init` command inside `terraform` directory to download the provider: - -```bash -$ cd ./terraform -$ terraform init -``` - -## Bring Infrastructure to a Desired State - -Once we described a desired state of the infrastructure (in our case it's a running VM), let's use Terraform to bring the infrastructure to this state: - -```bash -$ terraform apply -``` - -After Terraform ran successfully, use a gcloud command to verify that the machine was indeed launched: - -```bash -$ gcloud compute instances describe raddit-instance -``` - -## Deploy Application - -We did provisioning via Terraform, but we still need to run a script to deploy our application. - -Copy `deploy.sh` script to the created VM: - -```bash -$ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance) -$ scp ../scripts/deploy.sh raddit-user@${INSTANCE_IP}:/home/raddit-user -``` - -Connect to the VM via SSH: - -```bash -$ ssh raddit-user@${INSTANCE_IP} -``` - -Run deployment script: - -```bash -$ chmod +x ./deploy.sh -$ ./deploy.sh -``` - -## Access the Application - -Access the application in your browser by its public IP (don't forget to specify the port 9292). 
- -Open another terminal and run the following command to get a public IP of the VM: - -```bash -$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance -``` - -## Add other GCP resources into Terraform - -Do you remember how in previous labs we created some GCP resources like SSH project keys and a firewall rule for our application via `gcloud` tool? - -Let's add those into our Terraform configuration so that we know for sure those resources are present. - -First, delete the SSH project key and firewall rule: - -```bash -$ gcloud compute project-info remove-metadata --keys=ssh-keys -$ gcloud compute firewall-rules delete allow-raddit-tcp-9292 -``` - -Make sure that your application became inaccessible via port 9292 and SSH connection with a private key of `raddit-user` fails. - -Then add appropriate resources into `main.tf` file. Your final version of `main.tf` file should look similar to this (change the ssh key file path, if necessary): - -``` -resource "google_compute_instance" "raddit" { - name = "raddit-instance" - machine_type = "n1-standard-1" - zone = "europe-west1-b" - - # boot disk specifications - boot_disk { - initialize_params { - image = "raddit-base" // use image built with Packer - } - } - - # networks to attach to the VM - network_interface { - network = "default" - access_config {} // use ephemaral public IP - } -} - -resource "google_compute_project_metadata" "raddit" { - metadata { - ssh-keys = "raddit-user:${file("~/.ssh/raddit-user.pub")}" // path to ssh key file - } -} - -resource "google_compute_firewall" "raddit" { - name = "allow-raddit-tcp-9292" - network = "default" - allow { - protocol = "tcp" - ports = ["9292"] - } - source_ranges = ["0.0.0.0/0"] -} -``` - -Tell Terraform to apply the changes to bring the actual infrastructure state to the desired state we described: - -```bash -$ terraform apply -``` - -Verify that the application became accessible again on port 9292 and SSH connection with a private key works. - -## Create an output variable - -Remember how we often had to use a gcloud command like this to retrive a public IP address of a VM? - -```bash -$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance -``` - -We can tell Terraform to provide us this information using [output variables](https://www.terraform.io/intro/getting-started/outputs.html). - -Create another configuration file inside `terraform` directory and call it `outputs.tf`. Put the following content in it: - -``` -output "raddit_public_ip" { - value = "${google_compute_instance.raddit.network_interface.0.access_config.0.assigned_nat_ip}" -} -``` - -Run terraform apply again: - -```bash -$ terraform apply -``` - -You should see the public IP of the VM we created. - -Also note, that during this Terraform run, no resources have been created or changed, which means that the actual state of our infrastructure already meets the requirements of a desired state. - -## Save and commit the work - -Save and commit the `terraform` folder created in this lab into your `iac-tutorial` repo. - -## Conclusion - -In this lab, you saw in its most obvious way the application of Infrastructure as Code practice. - -We used `code` (Terraform configuration syntax) to describe the `desired state` of the infrastructure. Then we told Terraform to bring the actual state of the infrastructure to the desired state we described. 
- -With this approach, Terraform configuration becomes `a single source of truth` about the current state of your infrastructure. Moreover, the infrastructure is described as code, so we can apply to it the same practices we commonly use in development such as keeping the code in source control, use peer reviews for making changes, etc. - -All of this helps us get control over even the most complex infrastructure. - -Destroy the resources created by Terraform and move on to the next lab. - -```bash -$ terraform destroy -``` - -Next: [Ansible](06-ansible.md) diff --git a/docs/06-ansible.adoc b/docs/06-ansible.adoc new file mode 100644 index 0000000..1564552 --- /dev/null +++ b/docs/06-ansible.adoc @@ -0,0 +1,300 @@ += Ansible + +In the previous lab, you used Terraform to implement Infrastructure as Code approach to managing the cloud infrastructure resources. +There is another major type of tooling we need to consider, and that is *Configuration Management* (CM) tools. + +When talking about CM tools, we can often meet the acronym `CAPS` which stands for Chef, Ansible, Puppet and Saltstack - the most known and commonly used CM tools. +In this lab, we're going to look at Ansible and see how CM tools can help us improve our operations. + +== Intro + +If you think about our current operations and what else there is to improve, you will probably see the potential problem in the deployment process. + +The way we do deployment right now is by connecting via SSH to a VM and running a deployment script. +And the problem here is not the connecting via SSH part, but running a script. + +_Scripts are bad at long term management of system configuration, because they make common system configuration operations complex and error-prone._ + +When you write a script, you use a scripting language syntax (Bash, Python) to write commands which you think should change the system's configuration. +And the problem is that there are too many ways people can write the code that is meant to do the same things, which is the reason why scripts are often difficult to read and understand. +Besides, there are various choices as to what language to use for a script: should you write it in Ruby which your colleagues know very well or Bash which you know better? + +Common configuration management operations are well-known: copy a file to a remote machine, create a folder, start/stop/enable a process, install packages, etc. +So _we need a tool that would implement these common operations in a well-known and tested way, providing us with a clean and understandable syntax for using them_. +This way we wouldn't have to write complex scripts ourselves each time for the same tasks, possibly making mistakes along the way, but instead just tell the tool what should be done: what packages should be present, what processes should be started, etc. + +This is exactly what CM tools do. +So let's check it out using Ansible as an example. + +== Install Ansible + +NOTE: this lab assumes Ansible v2.4 is installed. +It may not work as expected with other versions as things change quickly. 
+ +Issue the following commands in the Google cloud shell (note that Ansible will not remain installed when your shell goes to sleep): + +[source,bash] +---- +$ sudo apt update +$ sudo apt install software-properties-common +$ sudo apt-add-repository --yes --update ppa:ansible/ansible +$ sudo apt install -y ansible +---- + +If you have issues, reference the instructions on how to install Ansible on your system from http://docs.ansible.com/ansible/latest/intro_installation.html[official documentation]. + +Verify that Ansible was installed by checking the version: + +[source,bash] +---- +$ ansible --version +---- + +== Infrastructure as Code project + +Create a new directory called `06-ansible` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. + +== Provision compute resources + +Start a VM and create other GCP resources for running your application applying Terraform configuration you wrote in the previous lab (destroy first if you have some still running): + +[source,bash] +---- +$ cd ./05-terraform # adapt this command as necessary to get to the directory +$ terraform apply -auto-approve +---- + +== Deploy playbook + +We'll rewrite our Bash script used for deployment using Ansible syntax. + +Ansible uses *tasks* to define commands used for system configuration. +Each Ansible task basically corresponds to one command in our Bash script. + +Each task uses some *module* to perform a certain operation on the configured system. +Modules are well tested functions which are meant to perform common system configuration operations. + +Let's look at our `install.sh` first to see what modules we might need to use: + +[source,bash] +---- +#!/bin/bash +set -e # exit immediately if anything returns non-zero. See https://www.javatpoint.com/linux-set-command + + +echo " ----- download, initialize, and run app ----- " +git clone https://github.com/dm-academy/node-svc-v1 +cd node-svc-v1 +git checkout 02 +npm install +npm install express +---- + +We clearly see here several types of operations: cloning a git repo and setting the branch, initializing npm, and installing express (a Node package). + +We also, to start the service, need to run this command: + +`$ sudo nodejs /home/node-user/node-svc-v1/server.js &` + +So we'll search for Ansible modules that allow to perform these operations. +Luckily, there are modules for all of these operations. + +Ansible uses YAML syntax to define tasks, which makes the configuration readable. + +Let's create a file called `deploy.yml` ("deploy" including both installation and launching) inside the `ansible` directory: + +[source,yaml] +---- +--- +- name: Deploy node-svc App + hosts: node-svc + tasks: + - name: Fetch the latest version of application code + # see https://docs.ansible.com/ansible/latest/modules/git_module.html + git: + repo: 'https://github.com/dm-academy/node-svc-v1' + dest: /home/node-user/node-svc-1 + version: "02" + register: clone + + - name: NPM install express and initialize app + # see https://docs.ansible.com/ansible/latest/modules/npm_module.html + npm: + name: express + global: yes + + - name: Install packages based on package.json. 
+      npm:
+        path: /home/node-user/node-svc-1
+
+    - name: Start the nodejs server
+      # see https://codelike.pro/deploy-nodejs-app-with-ansible-git-pm2/
+      become: true
+      become_user: node-user
+      command: pm2 start server.js --name node-app chdir=/home/node-user/node-svc-1
+      ignore_errors: yes
+      when: clone.changed
+----
+
+In this configuration file, which is called a *playbook* in Ansible terminology, we define several tasks.
+
+The `name` that precedes each task is used as a comment that will show up in the terminal when the task starts to run.
+
+The `register` option allows us to capture the result of running a task; we refer to the registered `clone` variable later in a conditional.
+
+The first task uses the git module to pull the code from GitHub.
+
+[source,yaml]
+----
+- name: Fetch the latest version of application code
+  # see https://docs.ansible.com/ansible/latest/modules/git_module.html
+  git:
+    repo: 'https://github.com/dm-academy/node-svc-v1'
+    dest: /home/node-user/node-svc-1
+    version: "02"
+  register: clone
+----
+
+The second and third tasks install the npm package express and then install the application's own dependencies based on `package.json`:
+
+[source,yaml]
+----
+  - name: NPM install express and initialize app
+    # see https://docs.ansible.com/ansible/latest/modules/npm_module.html
+    npm:
+      name: express
+      global: yes
+
+  - name: Install packages based on package.json.
+    npm:
+      path: /home/node-user/node-svc-1
+----
+
+The last task runs the server:
+
+[source,yaml]
+----
+  - name: Start the nodejs server
+    # see https://codelike.pro/deploy-nodejs-app-with-ansible-git-pm2/
+    become: true
+    become_user: node-user
+    command: pm2 start server.js --name node-app chdir=/home/node-user/node-svc-1
+    ignore_errors: yes
+    when: clone.changed
+----
+
+Note how each module takes a different set of module options.
+You can find full information about the options in a module's documentation.
+
+In the last task, we use the conditional statement http://docs.ansible.com/ansible/latest/playbooks_conditionals.html#the-when-statement[when] to make sure the server is only started when the local repo was updated, i.e.
+the result registered from the git task reports a change.
+This allows us to save some time spent on system configuration by not running unnecessary commands.
+
+Ansible also lets you define a *handlers* block on the same level as tasks.
+Handlers are special tasks which are run only in response to notification events from other tasks, for example restarting a service only when its code or configuration has changed.
+We don't need handlers in this simple playbook, but they are worth knowing about.
+
+== Inventory file
+
+The way that Ansible works is simple: it connects to a remote VM (usually via SSH) and runs the commands that stand behind each module you used in your playbook.
+
+To be able to connect to a remote VM, Ansible needs information like its IP address and credentials.
+This information is defined in a special file called an http://docs.ansible.com/ansible/latest/intro_inventory.html[inventory].
+
+Create a file called `hosts.yml` inside the `06-ansible` directory with the following content (make sure to change the `ansible_host` parameter to the public IP of your VM):
+
+[source,yaml]
+----
+node-svc:
+  hosts:
+    node-svc-01:
+      ansible_host: 35.35.35.35
+      ansible_user: node-user
+----
+
+Here we define a group of hosts (`node-svc`) under which we list the hosts that belong to this group.
+In this case, we list only one host under the hosts group and give it a name (`node-svc-01`) and information on how to connect to the host.
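+Before running the full playbook, it's worth checking that Ansible can actually reach the host described in the inventory.
+The built-in `ping` module simply connects over SSH and confirms that Python is available on the remote side; the inventory file and private key are passed explicitly here because the `ansible.cfg` that sets these defaults is only created later in this lab:
+
+[source,bash]
+----
+$ ansible all -i hosts.yml --private-key ~/.ssh/node-user -m ping
+----
+
+You should see a `SUCCESS` response with `"ping": "pong"` for `node-svc-01`; if not, re-check the IP address in `hosts.yml` and your SSH key setup.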
+
+Now note that inside our `deploy.yml` playbook we specified the `node-svc` host group in the `hosts` option before the tasks:
+
+[source,yaml]
+----
+---
+- name: Deploy node-svc App
+  hosts: node-svc
+  tasks:
+    ...
+----
+
+This tells Ansible to run the following tasks on the hosts defined in the hosts group `node-svc`.
+
+== Ansible configuration
+
+Before we can run a deployment, we need to make some configuration changes to how Ansible views and manages our `06-ansible` directory.
+
+Let's define a custom Ansible configuration for our directory.
+Create a file called `ansible.cfg` inside the `06-ansible` directory with the following content:
+
+[source,ini]
+----
+[defaults]
+inventory = ./hosts.yml
+private_key_file = ~/.ssh/node-user
+host_key_checking = False
+----
+
+This custom configuration tells Ansible what inventory file to use, what private key file to use for the SSH connection, and to skip the host key checking procedure.
+
+== Run playbook
+
+Now it's time to run your playbook and see how it works.
+
+Use the following commands to start a deployment:
+
+[source,bash]
+----
+$ cd ./06-ansible
+$ ansible-playbook deploy.yml
+----
+
+== Access Application
+
+Access the application in your browser by its public IP (don't forget to specify the port 3000) and make sure the application has been deployed and is functional.
+
+== Further Learning Ansible
+
+There's a whole lot to learn about Ansible.
+Try playing around with it more and create a `playbook` which provides the same system configuration as your `configuration.sh` script.
+Save it under the name `configuration.yml` inside the `06-ansible` folder, then use it inside the https://www.packer.io/docs/provisioners/ansible.html[ansible provisioner] instead of shell in your Packer template.
+
+You can find an example of a `configuration.yml` playbook https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml[here].
+
+And https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/packer/raddit-base-image-ansible.json[here] is an example of a Packer template which uses the ansible provisioner.
+
+== Save and commit the work
+
+Save and commit the `06-ansible` folder created in this lab into your `iac-tutorial` repo.
+
+== Idempotence
+
+One more advantage of CM tools over scripts is that the commands they implement are designed to be *idempotent* by default.
+
+Idempotence in this case means that even if you apply the same configuration changes multiple times, the result will stay the same.
+
+This is important because some commands that you use in scripts may not produce the same results when run more than once.
+So we always want to achieve idempotence for our configuration management system, sometimes applying conditional statements as we did in this lab.
+
+== Conclusion
+
+Ansible provided us with a clean YAML syntax for performing common system configuration tasks.
+This allowed us to get rid of our own implementation of configuration commands.
+
+It might not seem like a big improvement at this scale, because our deploy script is small, but it definitely brings order to system configuration management and is more noticeable at medium and large scale.
+
+Destroy the resources created by Terraform.
+ +[source,bash] +---- +$ terraform destroy +---- + +Next: xref:07-vagrant.adoc[Vagrant] diff --git a/docs/06-ansible.md b/docs/06-ansible.md deleted file mode 100644 index 726bf2d..0000000 --- a/docs/06-ansible.md +++ /dev/null @@ -1,233 +0,0 @@ -# Ansible - -In the previous lab, you used Terraform to implement Infrastructure as Code approach to managing the cloud infrastructure resources. Yet, we have another type of tooling to discover and that is **Configuration Management** (CM) tools. - -When talking about CM tools, we can often meet the acronym `CAPS` which stands for Chef, Ansible, Puppet and Saltstack - the most known and commonly used CM tools. In this lab, we're going to look at Ansible and see how CM tools can help us improve our operations. - -## Intro - -If you think about our current operations and what else there is to improve, you will probably see the potential problem in the deployment process. - -The way we do deployment right now is by connecting via SSH to a VM and running a deployment script. And the problem here is not the connecting via SSH part, but running a script. - -_Scripts are bad at long term management of system configuration, because they make common system configuration operations complex and error-prone._ - -When you write a script, you use a scripting language syntax (Bash, Python) to write commands which you think should change the system's configuration. And the problem is that there are too many ways people can write the code that is meant to do the same things, which is the reason why scripts are often difficult to read and understand. Besides, there are various choices as to what language to use for a script: should you write it in Ruby which your colleagues know very well or Bash which you know better? - -Common configuration management operations are well-known: copy a file to a remote machine, create a folder, start/stop/enable a process, install packages, etc. So _we need a tool that would implement these common operations in a well-known tested way and provide us with a clean and understandable syntax for using them_. This way we wouldn't have to write complex scripts ourselves each time for the same tasks, possibly making mistakes along the way, but instead just tell the tool what should be done: what packages should be present, what processes should be started, etc. - -This is exactly what CM tools do. So let's check it out using Ansible as an example. - -## Install Ansible - -NOTE: this lab assumes Ansible v2.4 is installed. It may not work as expected with other versions as things change quickly. - -You can follow the instructions on how to install Ansible on your system from [official documentation](http://docs.ansible.com/ansible/latest/intro_installation.html). - -I personally prefer installing it via [pip](http://docs.ansible.com/ansible/latest/intro_installation.html#latest-releases-via-pip) on my Linux machine. - -Verify that Ansible was installed by checking the version: - -```bash -$ ansible --version -``` - -## Infrastructure as Code project - -Create a new directory called `ansible` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. - -## Provision compute resources - -Start a VM and create other GCP resources for running your application applying Terraform configuration you wrote in the previous lab: - -```bash -$ cd ./terraform -$ terraform apply -``` - -## Deploy playbook - -We'll rewrite our Bash script used for deployment using Ansible syntax. 
- -Ansible uses **tasks** to define commands used for system configuration. Each Ansible task basically corresponds to one command in our Bash script. - -Each task uses some **module** to perform a certain operation on the configured system. Modules are well tested functions which are meant to perform common system configuration operations. - -Let's look at our `deploy.sh` script first to see what modules we might need to use: - -```bash -#!/bin/bash -set -e - -echo " ----- clone application repository ----- " -git clone https://github.com/Artemmkin/raddit.git - -echo " ----- install dependent gems ----- " -cd ./raddit -sudo bundle install - -echo " ----- start the application ----- " -sudo systemctl start raddit -sudo systemctl enable raddit -``` - -We clearly see here 3 different types of operations: cloning a git repo, installing gems via Bundler, and managing a service via systemd. - -So we'll search for Ansible modules that allow to perform these operations. Luckily, there are modules for all of these operations. - -Ansible uses YAML syntax to define tasks, which makes the configuration looks clean. - -Let's create a file called `deploy.yml` inside the `ansible` directory: - -```yaml ---- -- name: Deploy Raddit App - hosts: raddit-app - tasks: - - name: Fetch the latest version of application code - git: - repo: 'https://github.com/Artemmkin/raddit.git' - dest: /home/raddit-user/raddit - register: clone - - - name: Install application dependencies - become: true - bundler: - state: present - chdir: /home/raddit-user/raddit - when: clone.changed - notify: restart raddit - - handlers: - - name: restart raddit - become: true - systemd: name=raddit state=restarted -``` - -In this configuration file, which is called a **playbook** in Ansible terminology, we define 3 tasks: - -The `first task` uses git module to pull the code from GitHub. - -```yaml -- name: Fetch the latest version of application code - git: - repo: 'https://github.com/Artemmkin/raddit.git' - dest: /home/raddit-user/raddit - register: clone -``` - -The `name` that precedes each task is used as a comment that will show up in the terminal when the task starts to run. - -`register` option allows to capture the result output from running a task. We will use it later in a conditional statement for running a `bundle install` task. - -The second task runs bundler in the specified directory: - -```yaml -- name: Install application dependencies - become: true - bundler: - state: present - chdir: /home/raddit-user/raddit - when: clone.changed - notify: restart raddit -``` - -Note, how for each module we use a different set of module options (in this case `state` and `chdir`). You can find full information about the options in a module's documentation. - -In the second task, we use a conditional statement [when](http://docs.ansible.com/ansible/latest/playbooks_conditionals.html#the-when-statement) to make sure the `bundle install` task is only run when the local repo was updated, i.e. the output from running git clone command was changed. This allows us to save some time spent on system configuration by not running unnecessary commands. - -On the same level as tasks, we also define a **handlers** block. Handlers are special tasks which are run only in response to notification events from other tasks. In our case, `raddit` service gets restarted only when the `bundle install` task is run. 
- -## Inventory file - -The way that Ansible works is simple: it connects to a remote VM (usually via SSH) and runs the commands that stand behind each module you used in your playbook. - -To be able to connect to a remote VM, Ansible needs information like IP address and credentials. This information is defined in a special file called [inventory](http://docs.ansible.com/ansible/latest/intro_inventory.html). - -Create a file called `hosts.yml` inside `ansible` directory with the following content (make sure to change the `ansible_host` parameter to public IP of your VM): - -```yaml -raddit-app: - hosts: - raddit-instance: - ansible_host: 35.35.35.35 - ansible_user: raddit-user -``` - -Here we define a group of hosts (`raddit-app`) under which we list the hosts that belong to this group. In this case, we list only one host under the hosts group and give it a name (`raddit-instance`) and information on how to connect to the host. - -Now note, that inside our `deploy.yml` playbook we specified `raddit-app` host group in the `hosts` option before the tasks: - -```yaml ---- -- name: Deploy Raddit App - hosts: raddit-app - tasks: - ... -``` - -This will tell Ansible to run the following tasks on the hosts defined in hosts group `raddit-app`. - -## Ansible configuration - -Before we can run a deployment, we need to make some configuration changes to how Ansible views and manages our `ansible` directory. - -Let's define custom Ansible configuration for our directory. Create a file called `ansible.cfg` inside the `ansible` directory with the following content: - -```ini -[defaults] -inventory = ./hosts.yml -private_key_file = ~/.ssh/raddit-user -host_key_checking = False -``` - -This custom configuration will tell Ansible what inventory file to use, what private key file to use for SSH connection and to skip the host checking key procedure. - -## Run playbook - -Now it's time to run your playbook and see how it works. - -Use the following commands to start a deployment: - -```bash -$ cd ./ansible -$ ansible-playbook deploy.yml -``` - -## Access Application - -Access the application in your browser by its public IP (don't forget to specify the port 9292) and make sure application has been deployed and is functional. - -## Futher Learning Ansible - -There's a whole lot to learn about Ansible. Try playing around with it more and create a `playbook` which provides the same system configuration as your `configuration.sh` script. Save it under the name `configuration.yml` inside the `ansible` folder, then use it inside [ansible provisioner](https://www.packer.io/docs/provisioners/ansible.html) instead of shell in your Packer template. - -You can find an example of `configuration.yml` playbook [here](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml). - -And [here](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/packer/raddit-base-image-ansible.json) is an example of a Packer template which uses ansible provisioner. - -## Save and commit the work - -Save and commit the `ansible` folder created in this lab into your `iac-tutorial` repo. - -## Idempotence - -One more advantage of CM tools over scripts is that commands they implement designed to be **idempotent** by default. - -Idempotence in this case means that even if you apply the same configuration changes multiple times the result will stay the same. - -This is important because some commands that you use in scripts may not produce the same results when run more than once. 
So we always want to achieve idempotence for our configuration management system, sometimes applying conditionals statements as we did in this lab. - -## Conclusion - -Ansible provided us with a clean YAML syntax for performing common system configuration tasks. This allowed us to get rid of our own implementation of configuration commands. - -It might not seem like a big improvement at this scale, because our deploy script is small, but it definitely brings order to system configuration management and is more noticeable at medium and large scale. - -Destroy the resources created by Terraform. - -```bash -$ terraform destroy -``` - -Next: [Vagrant](07-vagrant.md) diff --git a/docs/07-vagrant.md b/docs/07-vagrant.adoc similarity index 62% rename from docs/07-vagrant.md rename to docs/07-vagrant.adoc index 70a6d5d..7a58c48 100644 --- a/docs/07-vagrant.md +++ b/docs/07-vagrant.adoc @@ -1,20 +1,34 @@ -## Vagrant +== Vagrant (OPTIONAL, for local laptops/workstations only) -In this lab, we're going to learn about [Vagrant](https://www.vagrantup.com/) which is another tool that implements IaC approach and is often used for creating development environments. +In this lab, we're going to learn about https://www.vagrantup.com/[Vagrant] which is another tool that implements IaC approach and is often used for creating development environments. -## Intro +_This lab is optional for those of you want to learn Vagrant on your personal laptops and workstations. +It will not work on GCE virtual machines, university workstations, or Google Cloud Shell. +The alternative to local development is to create a local development VM in the cloud. +You may skip to xref:08-docker.adoc[Docker]_. -Before this lab, our main focus was on how to create and manage an environment where our application runs and is accessible to the public. Let's call that environment `production` for the sake of simplicity of referring to that later. +== Intro -But what is about our local environment where we develop the code? Are there any problems with that? +Before this lab, our main focus was on how to create and manage an environment where our application runs and is accessible to the public. +Let's call that environment `production` for the sake of simplicity of referring to that later. + +But what is about our local environment where we develop the code? +Are there any problems with that? Running our application locally would require us installing all of its dependencies and configuring the local system pretty much the same way as we did in the previous labs. There are a few reasons why you don't want to do that: -* `This can break your system`. When you change your system configuration there are lot of things that can go wrong. For example, when installing/removing different packages you can easily mess up the work of your system's package manager. -* `When something breaks in your system configuration, it can take a long time to fix`. If you've messed up with you local system configuration, you either need to debug or reinstall your OS. Both of these can take a lot of your time and should be avoided. -* `You have no idea what is your development environment actually looks like`. Your local OS will certainly have its own specific configuration and packages installed, because you use it for every day tasks different than just running your application. For this reason, even if your application works on your local machine, you cannot describe exactly what is required for it to run. 
This is commonly known as the `works on my machine` problem and is often one of the reasons for a conflict between Dev and Ops. +* `This can break your system`. +When you change your system configuration there are lot of things that can go wrong. +For example, when installing/removing different packages you can easily mess up the work of your system's package manager. +* `When something breaks in your system configuration, it can take a long time to fix`. +If you've messed up with you local system configuration, you either need to debug or reinstall your OS. +Both of these can take a lot of your time and should be avoided. +* `You have no idea what is your development environment actually looks like`. +Your local OS will certainly have its own specific configuration and packages installed, because you use it for every day tasks different than just running your application. +For this reason, even if your application works on your local machine, you cannot describe exactly what is required for it to run. +This is commonly known as the `works on my machine` problem and is often one of the reasons for a conflict between Dev and Ops. Based on these problems, let's draw some requirements for our local dev environment: @@ -22,33 +36,39 @@ Based on these problems, let's draw some requirements for our local dev environm * `Isolation from our local system.` This leaves us with choices of a local/remote VM or containers. * `Ability to quickly and easily recreate when it breaks.` -Vagrant is a tool that allows to meet all of these requirements. Let's find out how. +Vagrant is a tool that allows to meet all of these requirements. +Let's find out how. -## Install Vagrant and VirtualBox +== Install Vagrant and VirtualBox -NOTE: this lab assumes Vagrant `v2.0.1` is installed. It may not work as expected on other versions. +NOTE: this lab assumes Vagrant `v2.0.1` is installed. +It may not work as expected on other versions. -[Download](https://www.vagrantup.com/downloads.html) and install Vagrant on your system. +https://www.vagrantup.com/downloads.html[Download] and install Vagrant on your system. Verify that Vagrant was successfully installed by checking the version: -```bash +[source,bash] +---- $ vagrant -v -``` +---- -[Download](https://www.virtualbox.org/wiki/Downloads) and install VirtualBox for running virtual machines locally. +https://www.virtualbox.org/wiki/Downloads[Download] and install VirtualBox for running virtual machines locally. -Also, make sure virtualization feature is enabled for your CPU. You would need to check BIOS settings for this. +Also, make sure virtualization feature is enabled for your CPU. +You would need to check BIOS settings for this. -## Create a Vagrantfile +== Create a Vagrantfile -If we compare Vagrant to the previous tools we've already learned, it reminds Terraform. Like Terraform, Vagrant allows you to declaratively describe VMs you want to provision, but it focuses on managing VMs (and containers) exclusively, so it's no good for things like firewall rules or VPC networks in the cloud. +If we compare Vagrant to the previous tools we've already learned, it reminds Terraform. +Like Terraform, Vagrant allows you to declaratively describe VMs you want to provision, but it focuses on managing VMs (and containers) exclusively, so it's no good for things like firewall rules or VPC networks in the cloud. To start a local VM using Vagrant, we need to define its characteristics in a special file called `Vagrantfile`. 
Create a file named `Vagrantfile` inside `iac-tutorial` directory with the following content: -```ruby +[source,ruby] +---- Vagrant.configure("2") do |config| # define provider configuration config.vm.provider :virtualbox do |v| @@ -60,66 +80,75 @@ Vagrant.configure("2") do |config| app.vm.hostname = "raddit-app" end end -``` +---- -Vagrant, like Terraform, doesn't start VMs itself. It uses a `provider` component to communicate the instructions to the actual provider of infrastructure resources. +Vagrant, like Terraform, doesn't start VMs itself. +It uses a `provider` component to communicate the instructions to the actual provider of infrastructure resources. In this case, we redefine Vagrant's default provider (VirtualBox) configuration to allocate 1024 MB of memory to each VM defined in this Vagrantfile: -```ruby +[source,ruby] +---- # define provider configuration config.vm.provider :virtualbox do |v| v.memory = 1024 end -``` +---- -We also specify characteristics of a VM we want to launch: what machine image (`box`) to use (Vagrant downloads a box from [Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud/boxes/catalog.html)), and what hostname to assign to a started VM: +We also specify characteristics of a VM we want to launch: what machine image (`box`) to use (Vagrant downloads a box from https://www.vagrantup.com/docs/vagrant-cloud/boxes/catalog.html[Vagrant Cloud]), and what hostname to assign to a started VM: -```ruby +[source,ruby] +---- # define a VM machine configuration config.vm.define "raddit-app" do |app| app.vm.box = "ubuntu/xenial64" app.vm.hostname = "raddit-app" end -``` +---- -## Start a Local VM +== Start a Local VM With the Vagrantfile created, you can start a VM on your local machine using Ubuntu 16.04 image from Vagrant Cloud. Run the following command inside the folder with your Vagrantfile: -```bash +[source,bash] +---- $ vagrant up -``` +---- Check the current status of the VM: -```bash +[source,bash] +---- $ vagrant status -``` +---- You can connect to a started VM via SSH using the following command: -```bash +[source,bash] +---- $ vagrant ssh -``` +---- -## Configure Dev Environment +== Configure Dev Environment Now that you have a VM running on your local machine, you need to configure it to run your application: install ruby, mongodb, etc. -There are many ways you can do that, which are known to you by now. You can configure the environment manually, using scripts or some CM tool like Ansible. +There are many ways you can do that, which are known to you by now. +You can configure the environment manually, using scripts or some CM tool like Ansible. _It's best to use the same configuration and the same CM tools across all of your environments._ -As we've already discussed, your application may work in your local environment, but it may not work on a remote VM running in production environment, because of the differences in configuration. But when your configuration is the same across all of your environments, the application will not fail for reasons like a missing package and the system configuration can generally be excluded as a potential cause of a failure when it occurs. +As we've already discussed, your application may work in your local environment, but it may not work on a remote VM running in production environment, because of the differences in configuration. 
+But when your configuration is the same across all of your environments, the application will not fail for reasons like a missing package and the system configuration can generally be excluded as a potential cause of a failure when it occurs. Because we chose to use Ansible for configuring our production environment in the previous lab, let's use it for configuration management of our dev environment, too. Change your Vagrantfile to look like this: -```ruby +[source,ruby] +---- Vagrant.configure("2") do |config| # define provider configuration config.vm.provider :virtualbox do |v| @@ -139,125 +168,142 @@ Vagrant.configure("2") do |config| end end end -``` +---- We added Ansible provisioning to the Vagrantfile which allows us to run a playbook for system configuration. -```ruby +[source,ruby] +---- # system configuration is done by Ansible app.vm.provision "ansible" do |ansible| ansible.playbook = "ansible/configuration.yml" end -``` +---- -In the previous lab, it was given to you as a task to create a `configuration.yml` playbook that provides the same functionality as `configuration.sh` script we had used before. If you did not do that, you can copy the playbook from [here](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml) (place it inside `ansible` directory). If you did create your own playbook, make sure you have a `pre_tasks` section as in [this example](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml). +In the previous lab, it was given to you as a task to create a `configuration.yml` playbook that provides the same functionality as `configuration.sh` script we had used before. +If you did not do that, you can copy the playbook from https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml[here] (place it inside `ansible` directory). +If you did create your own playbook, make sure you have a `pre_tasks` section as in https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml[this example]. Note, that we also added a port forwarding rule for accessing our application and instructed Vagrant to sync a local folder with application code to a specified VM folder (`/srv/raddit-app`): -```ruby +[source,ruby] +---- # sync a local folder with application code to the VM folder app.vm.synced_folder "raddit-app/", "/srv/raddit-app" # use port forwarding make application accessible on localhost app.vm.network "forwarded_port", guest: 9292, host: 9292 -``` +---- Now run the following command to configure the local dev environment: -```bash +[source,bash] +---- $ vagrant provision -``` +---- Verify the configuration: -```bash +[source,bash] +---- $ vagrant ssh $ ruby -v $ bundle version $ sudo systemctl status mongod -``` +---- -## Run Application Locally +== Run Application Locally -As we mentioned, we gave Vagrant the instruction to sync our folder with application to a VM's folder under the specified path. This way we can develop the application on our host machine using our favorite code editor and then run that code inside the VM. +As we mentioned, we gave Vagrant the instruction to sync our folder with application to a VM's folder under the specified path. +This way we can develop the application on our host machine using our favorite code editor and then run that code inside the VM. 
We need to first reload a VM for chages in our Vagrantfile to take effect: -```bash +[source,bash] +---- $ vagrant reload -``` +---- Then connect to the VM to start application: -```bash +[source,bash] +---- $ vagrant ssh $ cd /srv/raddit-app $ sudo bundle install $ puma -``` +---- The application should be accessible to you now at the following URL: http://localhost:9292 Stop the application using `ctrl + C` keys. -## Mess Up Dev Environment +== Mess Up Dev Environment One of our requirements to local dev environment was that you can freely mess it up and recreate in no time. Let's try that. Delete Ruby on the VM: -```bash + +[source,bash] +---- $ vagrant ssh $ sudo apt-get -y purge ruby $ ruby -v -``` +---- Try to run your application again (it should fail): -```bash +[source,bash] +---- $ cd /srv/raddit-app $ puma -``` +---- -## Recreate Dev Environment +== Recreate Dev Environment Let's try to recreate our dev environment from scratch to see how big of a problem it will be. Run the following commands to destroy the current dev environment and create a new one: -```bash +[source,bash] +---- $ vagrant destroy -f $ vagrant up -``` +---- Once a new VM is up and running, try to launch your app in it: -```bash +[source,bash] +---- $ vagrant ssh $ ruby -v $ cd /srv/raddit-app $ sudo bundle install $ puma -``` +---- The Ruby package should be present and the application should run without problems. -Recreating a new dev environment was easy, took very little time and it didn't affect our host OS. That's exactly what we needed. +Recreating a new dev environment was easy, took very little time and it didn't affect our host OS. +That's exactly what we needed. -## Save and commit the work +== Save and commit the work Save and commit the Vagrantfile created in this lab into your `iac-tutorial` repo. -## Conclusion +== Conclusion -Vagrant was able to meet our requirements for dev environments. It makes creating/recreating and configuring a dev environment easy and safe for our host operating system. +Vagrant was able to meet our requirements for dev environments. +It makes creating/recreating and configuring a dev environment easy and safe for our host operating system. Because we describe our local infrastructure in code in a Vagrantfile, we keep it in source control and make sure all our other colleagues have the same environment for the application as we do. Destroy the VM: -```bash +[source,bash] +---- $ vagrant destroy -f -``` +---- -Next: [Docker](08-docker.md) +Next: xref:08-docker.adoc[Docker] diff --git a/docs/08-docker.adoc b/docs/08-docker.adoc new file mode 100644 index 0000000..dd49135 --- /dev/null +++ b/docs/08-docker.adoc @@ -0,0 +1,189 @@ +== Docker + +In this lab, we will talk about managing containers for the first time in this tutorial. +Particularly, we will talk about https://www.docker.com/what-docker[Docker] which is the most widely used platform for running containers. + +== Intro + +Remember when we talked about packer, we mentioned a few words about `Immutable Infrastructure` model? +The idea was to package all application dependencies and application itself inside a machine image, so that we don't have to configure the system after start. +Containers implement the same model, but they do it in a more efficient way. + +Containers allow you to create self-contained isolated environments for running your applications. 
+ +They have some significant advantages over VMs in terms of implementing Immutable Infrastructure model: + +* `Containers are much faster to start than VMs.` Container starts in seconds, while a VM takes minutes. +It's important when you're doing an update/rollback or scaling your service. +* `Containers enable better utilization of compute resources.` Very often computer resources of a VM running an application are underutilized. +Launching multiple instances of the same application on one VM has a lot of difficulties: different application versions may need different versions of dependent libraries, init scripts require special configuration. +With containers, running multiple instances of the same application on the same machine is easy and doesn't require any system configuration. +* `Containers are more lightweight than VMs.` Container images are much smaller than machine images, because they don't need a full operating system in order to run. +In fact, a container image can include just a single binary and take just a few MBs of your disk space. +This means that we need less space for storing the images and the process of distributing images goes faster. + +Let's try to implement `Immutable Infrastructure` model with Docker containers, while paying special attention to the `Dockerfile` part as a way to practice `Infrastructure as Code` approach. + +== (FOR PERSONAL LAPTOPS AND WORKSTATIONS ONLY) Install Docker Engine + +_Docker is already installed on Google Cloud Shell._ + +The https://docs.docker.com/engine/docker-overview/#docker-engine[Docker Engine] is the daemon that gets installed on the system and allows you to manage containers with simple CLI. + +https://www.docker.com/community-edition[Install] free Community Edition of Docker Engine on your system. + +Verify that the version of Docker Engine is \=> 17.09.0: + +[source,bash] +---- +$ docker -v +---- + +== (FOR ALL) Create Dockerfile + +You describe a container image that you want to create in a special file called *Dockerfile*. + +Dockerfile contains `instructions` on how the image should be built. +Here are some of the most common instructions that you can meet in a Dockerfile: + +* `FROM` is used to specify a `base image` for this build. +It's similar to the builder configuration which we defined in a Packer template, but in this case instead of describing characteristics of a VM, we simply specify a name of a container image used for build. +This should be the first instruction in the Dockerfile. +* `ADD` and `COPY` are used to copy a file/directory to the container. +See the https://stackoverflow.com/questions/24958140/what-is-the-difference-between-the-copy-and-add-commands-in-a-dockerfile[difference] between the two. +* `RUN` is used to run a command inside the image. +Mostly used for installing packages. +* `ENV` sets an environment variable available within the container. +* `WORKDIR` changes the working directory of the container to a specified path. +It basically works like a `cd` command on Linux. +* `CMD` sets a default command, which will be executed when a container starts. +This should be a command to start your application. + +Let's use these instructions to create a Docker container image for our node-svc application. 
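+A small, optional aside before we write the Dockerfile: it copies the whole build context into the image (`COPY . /app`), so it is common practice to also add a `.dockerignore` file next to the Dockerfile to keep local artifacts out of the image.
+A minimal sketch, assuming a typical Node.js working directory (adjust the entries to whatever is actually present in yours):
+
+----
+# .dockerignore - paths excluded from the build context
+node_modules
+npm-debug.log
+.git
+----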
+
+Inside your `my-iac-tutorial` directory, create a directory called `08-docker`, and in it a text file called `Dockerfile` with the following content:
+
+----
+FROM node:11
+# Create app directory
+WORKDIR /app
+# Install app dependencies
+# A wildcard is used to ensure both package.json AND package-lock.json are copied
+# where available (npm@5+)
+COPY package*.json ./
+RUN npm install
+RUN npm install express
+# If you are building your code for production
+# RUN npm ci --only=production
+# Bundle app source
+COPY . /app
+EXPOSE 3000
+CMD [ "node", "server.js" ]
+----
+
+This Dockerfile repeats the steps that we have performed multiple times by now to configure a running environment for our application and run it.
+
+We first choose an image that already contains Node of the required version:
+
+----
+# Use base image with node installed
+FROM node:11
+----
+
+The base image is downloaded from Docker's official registry (a store of images) called https://hub.docker.com/[Docker Hub].
+
+We then install the required application dependencies:
+
+----
+# Install app dependencies
+# A wildcard is used to ensure both package.json AND package-lock.json are copied
+# where available (npm@5+)
+COPY package*.json ./
+RUN npm install
+RUN npm install express
+----
+
+Then we copy the application itself:
+
+----
+# Bundle app source
+COPY . /app
+----
+
+Then we specify a default command that should be run when a container from this image starts:
+
+----
+CMD [ "node", "server.js" ]
+----
+
+== Build Container Image
+
+Once you have defined how your image should be built, run the following command inside your `my-iac-tutorial` directory to create a container image for the node-svc application:
+
+[source,bash]
+----
+$ docker build -t /node-svc-v1 .
+----
+
+The resulting image will be named `node-svc-v1`.
+Find it in the list of your local images:
+
+[source,bash]
+----
+$ docker images | grep node-svc
+----
+
+If you wish, you can save your build command in a script, such as `build.sh`.
+
+Now, run the container:
+
+[source,bash]
+----
+$ docker run -d -p 8081:3000 /node-svc-v1
+----
+
+Notice the "8081:3000" syntax.
+This means that while the container listens on port 3000 internally, it is exposed externally via port 8081.
+
+Again, you may wish to save this in a script, such as `run.sh`.
+
+Now, test the container:
+
+[source,bash]
+----
+$ curl localhost:8081
+Successful request.
+----
+
+Again, you may wish to save this in a script, such as `test.sh`.
+
+== Save and commit the work
+
+Save and commit the files created in this lab.
+
+== Conclusion
+
+In this lab, you adopted containers for running your application.
+This is a different type of technology from what we dealt with in the previous labs.
+Nevertheless, we use the Infrastructure as Code approach here, too.
+
+We describe the configuration of our container image in a Dockerfile using the Dockerfile syntax.
+We then save that Dockerfile in our application repository.
+This way we can build the application image consistently across environments.
+
+Destroy the current playground before moving on to the next lab, using `docker ps`, `docker kill`, `docker images`, and `docker rmi`.
+In the example below, the container is named "beautiful_pascal".
+Yours will be different.
+Follow the example, substituting yours.
+ +[source,bash] +---- +$ docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +64e60b7b0c81 charlestbetz/node-svc-v1 "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:8081->3000/tcp beautiful_pascal +$ docker kill beautiful_pascal +$ docker images +# returns list of your images +$ docker rmi -f +---- + +Next: xref:09-docker-compose.adoc[Docker Compose] diff --git a/docs/08-docker.md b/docs/08-docker.md deleted file mode 100644 index 2e161d7..0000000 --- a/docs/08-docker.md +++ /dev/null @@ -1,205 +0,0 @@ -## Docker - -In this lab, we will talk about managing containers for the first time in this tutorial. Particularly, we will talk about [Docker](https://www.docker.com/what-docker) which is the most widely used platform for running containers. - -## Intro - -Remember when we talked about packer, we mentioned a few words about `Immutable Infrastructure` model? The idea was to package all application dependencies and application itself inside a machine image, so that we don't have to configure the system after start. Containers implement the same model, but they do it in a more efficient way. - -Containers allow you to create self-contained isolated environments for running your applications. - -They have some significant advantages over VMs in terms of implementing Immutable Infrastructure model: - -* `Containers are much faster to start than VMs.` Container starts in seconds, while a VM takes minutes. It's important when you're doing an update/rollback or scaling your service. -* `Containers enable better utilization of compute resources.` Very often computer resources of a VM running an application are underutilized. Launching multiple instances of the same application on one VM has a lot of difficulties: different application versions may need different versions of dependent libraries, init scripts require special configuration. With containers, running multiple instances of the same application on the same machine is easy and doesn't require any system configuration. -* `Containers are more lightweight than VMs.` Container images are much smaller than machine images, because they don't need a full operating system in order to run. In fact, a container image can include just a single binary and take just a few MBs of your disk space. This means that we need less space for storing the images and the process of distributing images goes faster. - -Let's try to implement `Immutable Infrastructure` model with Docker containers, while paying special attention to the `Dockerfile` part as a way to practice `Infrastructure as Code` approach. - -## Install Docker Engine - -The [Docker Engine](https://docs.docker.com/engine/docker-overview/#docker-engine) is the daemon that gets installed on the system and allows you to manage containers with simple CLI. - -[Install](https://www.docker.com/community-edition) free Community Edition of Docker Engine on your system. - -Verify that the version of Docker Engine is => 17.09.0: - -```bash -$ docker -v -``` - -## Create Dockerfile - -You describe a container image that you want to create in a special file called **Dockerfile**. - -Dockerfile contains `instructions` on how the image should be built. Here are some of the most common instructions that you can meet in a Dockerfile: - -* `FROM` is used to specify a `base image` for this build. 
It's similar to the builder configuration which we defined in a Packer template, but in this case instead of describing characteristics of a VM, we simply specify a name of a container image used for build. This should be the first instruction in the Dockerfile. -* `ADD` and `COPY` are used to copy a file/directory to the container. See the [difference](https://stackoverflow.com/questions/24958140/what-is-the-difference-between-the-copy-and-add-commands-in-a-dockerfile) between the two. -* `RUN` is used to run a command inside the image. Mostly used for installing packages. -* `ENV` sets an environment variable available within the container. -* `WORKDIR` changes the working directory of the container to a specified path. It basically works like a `cd` command on Linux. -* `CMD` sets a default command, which will be executed when a container starts. This should be a command to start your application. - -Let's use these instructions to create a Docker container image for our raddit application. - -Create a file called `Dockerfile` inside your `iac-tutorial` repo with the following content: - -``` -# Use base image with Ruby installed -FROM ruby:2.3 - -# install required system packages -RUN apt-get update -qq && \ - apt-get install -y build-essential - -# create application directory and install dependencies -ENV APP_HOME /app -RUN mkdir $APP_HOME -WORKDIR $APP_HOME -COPY raddit-app/Gemfile* $APP_HOME/ -RUN bundle install - -# Copy the application code to the container -ADD raddit-app/ $APP_HOME -# Run "puma" command on container's start -CMD ["puma"] -``` - -This Dockerfile repeats the steps that we did multiple times by now to configure a running environment for our application and run it. - -We first choose an image that already contains Ruby of required version: -``` -# Use base image with Ruby installed -FROM ruby:2.3 -``` - -The base image is downloaded from Docker official registry (storage of images) called [Docker Hub](https://hub.docker.com/). - -We then install required system packages and application dependencies: - -``` -# install required system packages -RUN apt-get update -qq && \ - apt-get install -y build-essential - -# create application home directory and install dependencies -ENV APP_HOME /app -RUN mkdir $APP_HOME -WORKDIR $APP_HOME -COPY raddit-app/Gemfile* $APP_HOME/ -RUN bundle install -``` - -Then we copy the directory with application code and specify a default command that should be run when a container from this image starts: - -``` -# Copy the application code to the container -ADD raddit-app/ $APP_HOME -# Run "puma" command on container's start -CMD ["puma"] -``` - -## Build Container Image - -Once you defined how your image should be built, run the following command inside `iac-tutorial` directory to create a container image for raddit application: - -```bash -$ docker build --tag raddit . -``` - -The resulting image will be named `raddit`. Find it in the list of your local images: - -```bash -$ docker images | grep raddit -``` - -## Bridge Network - -We are going to run multiple containers in this setup. To allow containers communicate with each other by container names, we'll create a [user-defined bridge network](https://docs.docker.com/engine/userguide/networking/#user-defined-networks): - -```bash -$ docker network create raddit-network -``` - -Verify that the network was created: - -```bash -$ docker network ls -``` - -## MongoDB Container - -We shouldn't forget that we also need a MongoDB for our application to work. 
- -The philosophy behind containers is that we create one container per process. So we'll run MongoDB in another container. - -We will use a public image from Docker Hub to run a MongoDB container alongside raddit application container. However, I recommend you for the sake of practice write a Dockerfile for MongoDB and create your own image. - -Because MongoDB is a stateful service, we'll first create a named volume for it to persist the data beyond the container removal. - -```bash -$ docker volume create mongo-data -``` - -Check that volume was created: - -```bash -$ docker volume ls | grep mongo-data -``` - -Now run the following command to download a MongodDB image and start a container from it: - -```bash -$ docker run --name mongo-database \ - --volume mongo-data:/data/db \ - --network raddit-network \ - --detach mongo:3.2 -``` - -Verify that the container is running: - -```bash -$ docker container ls -``` - -## Start Application Container - -Start the application container from the image you've built: - -```bash -$ docker run --name raddit-app \ - --env DATABASE_HOST=mongo-database \ - --network raddit-network \ - --publish 9292:9292 \ - --detach raddit -``` - -Note, how we also passed an environment variable with the command to the application container. Since MongoDB is not reachable at `localhost` as it was in the previous labs, [we need to pass the environment variable with MongoDB address](https://github.com/Artemmkin/iac-tutorial/blob/master/raddit-app/app.rb#L11) to tell our application where to connect. Automatic DNS resolution of container names within a user-defined network makes it possible to simply pass the name of a MongoDB container instead of an IP address. - -Port mapping option (`--publish`) that we passed to the command is used to make the container reachable to the outsite world. - -## Access Application - -The application should be accessible to your at http://localhost:9292 - -## Save and commit the work - -Save and commit the `Dockerfile` created in this lab into your `iac-tutorial` repo. - -## Conclusion - -In this lab, you adopted containers for running your application. This is a different type of technology from what we used to deal with in the previous labs. Nevertheless, we use Infrastructure as Code approach here, too. - -We describe the configuration of our container image in a Dockerfile using Dockerfile's syntax. We then save that Dockefile in our application repository. This way we can build the application image consistently across any environments. - -Destroy the current playground before moving on to the next lab. - -```bash -$ docker rm -f mongo-database -$ docker rm -f raddit-app -$ docker volume rm mongo-data -$ docker network rm raddit-network -``` - -Next: [Docker Compose](09-docker-compose.md) diff --git a/docs/09-docker-compose.md b/docs/09-docker-compose.adoc similarity index 57% rename from docs/09-docker-compose.md rename to docs/09-docker-compose.adoc index 4775d1f..bd0e7fe 100644 --- a/docs/09-docker-compose.md +++ b/docs/09-docker-compose.adoc @@ -1,38 +1,48 @@ -## Docker Compose +== Docker Compose + +== PENDING CREATION OF node-svc MULTI-ARCHITECTURE + +== DISREGARD RADDIT APP In the last lab, we learned how to create Docker container images using Dockerfile and implementing Infrastructure as Code approach. -This time we'll learn how to describe in code and manage our local container infrastructure with [Docker Compose](https://docs.docker.com/compose/overview/). 
+This time we'll learn how to describe in code and manage our local container infrastructure with https://docs.docker.com/compose/overview/[Docker Compose]. -## Intro +== Intro -Remember how in the previous lab we had to use a lot of `docker` CLI commands in order to run our application locally? Specifically, we had to create a network for containers to communicate, a volume for container with MongoDB, launch MongoDB container, launch our application container. +Remember how in the previous lab we had to use a lot of `docker` CLI commands in order to run our application locally? +Specifically, we had to create a network for containers to communicate, a volume for container with MongoDB, launch MongoDB container, launch our application container. -This is a lot of manual work and we only have 2 containers in our setup. Imagine how much work it would be to run a microservices application which includes a dozen of services. +This is a lot of manual work and we only have 2 containers in our setup. +Imagine how much work it would be to run a microservices application which includes a dozen of services. To make the management of our local container infrastructure easier and more reliable, we need a tool that would allow us to describe the desired state of a local environment and then it would create it from our description. -**Docker Compose** is exactly the tool we need. Let's see how we can use it. +*Docker Compose* is exactly the tool we need. +Let's see how we can use it. -## Install Docker Compose +== (FOR PERSONAL LAPTOPS AND WORKSTATIONS ONLY) Install Docker Compose -Follow the official documentation on [how to install Docker Compose](https://docs.docker.com/compose/install/) on your system. +Follow the official documentation on https://docs.docker.com/compose/install/[how to install Docker Compose] on your system. -Verify that installed version of Docker Compose is => 1.18.0: +Verify that installed version of Docker Compose is \=> 1.18.0: -```bash +[source,bash] +---- $ docker-compose -v -``` +---- -## Describe Local Container Infrastructure +== Describe Local Container Infrastructure -Docker Compose could be compared to Terraform, but it manages only Docker container infrastructure. It allows us to start containers, create networks and volumes, pass environment variables to containers, publish ports, etc. +Docker Compose could be compared to Terraform, but it manages only Docker container infrastructure. +It allows us to start containers, create networks and volumes, pass environment variables to containers, publish ports, etc. -Let's use Docker Compose [declarative syntax](https://docs.docker.com/compose/compose-file/) to describe what our local container infrastructure should look like. +Let's use Docker Compose https://docs.docker.com/compose/compose-file/[declarative syntax] to describe what our local container infrastructure should look like. Create a file called `docker-compose.yml` inside your `iac-tutorial` repo with the following content: -```yml +[source,yml] +---- version: '3.3' # define services (containers) that should be running @@ -65,26 +75,29 @@ volumes: # define networks to be created networks: raddit-network: -``` +---- In this compose file, we define 3 sections for configuring different components of our container infrastructure. -Under the **services** section we define what containers we want to run. 
We give each service a `name` and pass the options such as what `image` to use to launch container for this service, what `volumes` and `networks` should be attached to this container. +Under the *services* section we define what containers we want to run. +We give each service a `name` and pass the options such as what `image` to use to launch container for this service, what `volumes` and `networks` should be attached to this container. If you look at `mongo-database` service definition, you should find it to be very similar to the docker command that we used to start MongoDB container in the previous lab: -```bash +[source,bash] +---- $ docker run --name mongo-database \ --volume mongo-data:/data/db \ --network raddit-network \ --detach mongo:3.2 -``` +---- -So the syntax of Docker Compose can be easily understood by a person not even familiar with it [the documentation](https://docs.docker.com/compose/compose-file/#service-configuration-reference). +So the syntax of Docker Compose can be easily understood by a person not even familiar with it https://docs.docker.com/compose/compose-file/#service-configuration-reference[the documentation]. `raddit-app` services configuration is a bit different from MongoDB service in a way that we specify a `build` option instead of `image` to build the container image from a Dockerfile before starting a container: -```yml +[source,yml] +---- raddit-app: # path to Dockerfile to build an image and start a container build: . @@ -97,60 +110,69 @@ raddit-app: # start raddit-app only after mongod-database service was started depends_on: - mongo-database -``` +---- Also, note the `depends_on` option which allows us to tell Docker Compose that this `raddit-app` service depends on `mongo-database` service and should be started after `mongo-database` container was launched. -The other two top-level sections in this file are **volumes** and **networks**. They are used to define volumes and networks that should be created: +The other two top-level sections in this file are *volumes* and *networks*. +They are used to define volumes and networks that should be created: -```yml +[source,yml] +---- # define volumes to be created volumes: mongo-data: # define networks to be created networks: raddit-network: -``` +---- These basically correspond to the commands that we used in the previous lab to create a named volume and a network: -```bash +[source,bash] +---- $ docker volume create mongo-data $ docker network create raddit-network -``` +---- -## Create Local Infrastructure +== Create Local Infrastructure Once you described the desired state of you infrastructure in `docker-compose.yml` file, tell Docker Compose to create it using the following command: -```bash +[source,bash] +---- $ docker-compose up -``` +---- or use this command to run containers in the background: -```bash +[source,bash] +---- $ docker-compose up -d -``` +---- -## Access Application +== Access Application -The application should be accessible to your as before at http://localhost:9292 +The application should be accessible to your as before via the web preview icon in Google Cloud Shell. +`curl localhost:9292` will at least dump out the HTML (not very pretty, but if you see HTML you know the service is working to some degree at least). -## Save and commit the work +== Save and commit the work Save and commit the `docker-compose.yml` file created in this lab into your `iac-tutorial` repo. 
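+
+Optionally, you can sanity-check the stack that Docker Compose created before moving on.
+These are read-only commands, assuming you run them in the directory that contains `docker-compose.yml`:
+
+[source,bash]
+----
+# Validate and print the effective configuration (catches YAML and syntax mistakes)
+$ docker-compose config
+
+# List the containers managed by this compose file and their state
+$ docker-compose ps
+
+# Tail the logs of a single service, e.g. the application container
+$ docker-compose logs --tail=20 raddit-app
+----
+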
-## Conclusion
+== Conclusion

-In this lab, we learned how to use Docker Compose tool to implement Infrastructure as Code approach to managing a local container infrastructure. This helped us automate and document the process of creating all the necessary components for running our containerized application.
+In this lab, we learned how to use the Docker Compose tool to implement the Infrastructure as Code approach to managing local container infrastructure.
+This helped us automate and document the process of creating all the necessary components for running our containerized application.

-If we keep created `docker-compose.yml` file inside the application repository, any of our colleagues can create the same container environment on any system with just one command. This makes Docker Compose a perfect tool for creating local dev environments and simple application deployments.
+If we keep the `docker-compose.yml` file we created inside the application repository, any of our colleagues can create the same container environment on any system with just one command.
+This makes Docker Compose a perfect tool for creating local dev environments and simple application deployments.

 To destroy the local playground, run the following command:

-```bash
+[source,bash]
+----
 $ docker-compose down --volumes
-```
+----

-Next: [Kubernetes](10-kubernetes.md)
+Next: xref:10-kubernetes.adoc[Kubernetes]
diff --git a/docs/10-kubernetes.adoc b/docs/10-kubernetes.adoc
new file mode 100644
index 0000000..268c7f3
--- /dev/null
+++ b/docs/10-kubernetes.adoc
@@ -0,0 +1,486 @@
+== Kubernetes
+
+In the previous labs, we learned how to run Docker containers locally.
+Running containers at scale is quite different, and a special class of tools, known as *orchestrators*, is used for that task.
+
+In this lab, we'll take a look at the most popular open-source orchestration platform, https://kubernetes.io/[Kubernetes], and see how it implements the Infrastructure as Code model.
+
+== Intro
+
+We used Docker Compose to consistently create container infrastructure on one machine (our local machine).
+However, our production environment may include tens or hundreds of VMs to have enough capacity to provide service to a large number of users.
+What do you do in that case?
+
+Running Docker Compose on each VM in the cluster seems like a lot of work.
+Besides, if you want containers running on different hosts to communicate with each other, you need to create a special type of network called an `overlay` network, which you can't create using Docker Compose alone.
+
+Moreover, questions arise as to:
+
+* how to load balance containerized applications?
+* how to perform container health checks and ensure the required number of containers is running?
+
+The world of containers is very different from the world of virtual machines and needs a special platform for management.
+
+Kubernetes is the most widely used orchestration platform for running and managing containers at scale.
+It solves the common problems (some of which we've mentioned above) related to running containers on multiple hosts.
+And we'll see in this lab that it uses the Infrastructure as Code approach to managing container infrastructure.
+
+Let's try to run our `raddit` application on a Kubernetes cluster.
+
+== (FOR PERSONAL LAPTOPS AND WORKSTATIONS ONLY) Install Kubectl
+
+_Kubectl is already installed on Google Cloud Shell._
+
+https://kubernetes.io/docs/reference/kubectl/overview/[Kubectl] is a command line tool that we will use to run commands against the Kubernetes cluster.
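+
+To give you a feel for what those commands look like, kubectl generally follows a `kubectl <verb> <resource>` pattern.
+The examples below are only a preview and will work once the cluster is created later in this lab; `<node-name>` is a placeholder for one of your node names:
+
+[source,bash]
+----
+# Read-only examples of the verb/resource pattern
+$ kubectl get nodes                   # list the cluster's worker nodes
+$ kubectl get pods --all-namespaces   # list pods across all namespaces
+$ kubectl describe node <node-name>   # detailed view of a single node
+
+# Built-in documentation for manifest fields (useful when writing YAML manifests)
+$ kubectl explain deployment.spec.replicas
+----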
+ +You can install `kubectl` onto your system as part of Google Cloud SDK by running the following command: + +[source,bash] +---- +$ gcloud components install kubectl +---- + +Check the version of kubectl to make sure it is installed: + +[source,bash] +---- +$ kubectl version +---- + +== Infrastructure as Code project + +Create a new directory called `kubernetes` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. + +== Describe Kubernetes cluster in Terraform + +We'll use https://cloud.google.com/kubernetes-engine/[Google Kubernetes Engine] (GKE) service to deploy a Kubernetes cluster of 3 nodes. + +We'll describe a Kubernetes cluster using Terraform so that we can manage it through code. + +Create a directory named `terraform` inside `kubernetes` directory. +Create three files within it: + +[source,bash] +---- +variables.tf +terraform.tfvars +main.tf +---- + +=== variables.tf + +[source,bash] +---- +# Provider configuration variables +variable "project_id" { + description = "Project ID in GCP" +} + +variable "region" { + description = "Region in which to manage GCP resources" +} + +# Cluster configuration variables +variable "cluster_name" { + description = "The name of the cluster, unique within the project and zone" +} + +variable "zone" { + description = "The zone in which nodes specified in initial_node_count should be created in" +} +---- + +=== terraform.tfvars + +[source,bash] +---- +// define provider configuration variables +project_id = "some-project-ID" # project in which to create a cluster +region = "some-google-region" # region in which to create a cluster + +// define Kubernetes cluster variables +cluster_name = "iac-tutorial-cluster" # cluster name +zone = "some-google-zone" # zone in which to create a cluster nodes +---- + +=== main.tf + +[source,bash] +---- +resource "google_container_cluster" "primary" { + name = "${var.cluster_name}" + location = "${var.zone}" + initial_node_count = 3 + + master_auth { + username = "" + password = "" + + client_certificate_config { + issue_client_certificate = false + } + } + + # configure kubectl to talk to the cluster + provisioner "local-exec" { + command = "gcloud container clusters get-credentials ${var.cluster_name} --zone ${var.zone} --project ${var.project_id}" + } + + node_config { + oauth_scopes = [ + "https://www.googleapis.com/auth/compute", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/logging.write", + "https://www.googleapis.com/auth/monitoring", + ] + + metadata = { + disable-legacy-endpoints = "true" + } + + tags = ["iac-kubernetes"] + } + + timeouts { + create = "30m" + update = "40m" + } +} + +# create firewall rule to allow access to application +resource "google_compute_firewall" "nodeports" { + name = "node-port-range" + network = "default" + + allow { + protocol = "tcp" + ports = ["30000-32767"] + } + source_ranges = ["0.0.0.0/0"] +} +---- + +We'll use this Terraform code to create a Kubernetes cluster. + +== Create Kubernetes Cluster + +`main.tf` holds all the information about the cluster that should be created. +It's parameterized using Terraform https://www.terraform.io/intro/getting-started/variables.html[input variables] which allow you to easily change configuration parameters. + +Look into `terraform.tfvars` file which contains definitions of the input variables and change them if necessary. +You'll most probably want to change `project_id` value. 
+ +---- +// define provider configuration variables +project_id = "infrastructure-as-code" # project in which to create a cluster +region = "europe-west1" # region in which to create a cluster + +// define Kubernetes cluster variables +cluster_name = "iac-tutorial-cluster" # cluster name +zone = "europe-west1-b" # zone in which to create a cluster nodes +---- + +After you've defined the variables, run Terraform inside `kubernetes/terraform` to create a Kubernetes cluster consisting of 2 nodes (VMs for running our application containers). + +[source,bash] +---- +$ gcloud services enable container.googleapis.com # enable Kubernetes Engine API +$ terraform init +$ terraform apply +---- + +Wait until Terraform finishes creation of the cluster. +It can take about 3-5 minutes. + +Check that the cluster is running and `kubectl` is properly configured to communicate with it by fetching cluster information: + +[source,bash] +---- +$ kubectl cluster-info + +Kubernetes master is running at https://35.200.56.100 +GLBCDefaultBackend is running at https://35.200.56.100/api/v1/namespaces/kube-system/services/default-http-backend/proxy +... +---- + +== Deployment manifest + +Kubernetes implements Infrastructure as Code approach to managing container infrastructure. +It uses special entities called *objects* to represent the `desired state` of your cluster. +With objects you can describe + +* What containerized applications are running (and on which nodes) +* The compute resources available to those applications +* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance + +By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; +this is your cluster's `desired state`. +Kubernetes then makes sure that the cluster's actual state meets the desired state described in the object. + +Most of the times, you describe the object in a `.yaml` file called `manifest` and then give it to `kubectl` which in turn is responsible for relaying that information to Kubernetes via its API. + +*Deployment object* represents an application running on your cluster. +We'll use it to run containers of our applications. + +Create a directory called `manifests` inside `kubernetes` directory. +Create a `deployments.yaml` file inside it with the following content: + +[source,yaml] +---- +apiVersion: apps/v1beta1 # implies the use of kubernetes 1.7 + # use apps/v1beta2 for kubernetes 1.8 +kind: Deployment +metadata: + name: raddit-deployment +spec: + replicas: 2 + selector: + matchLabels: + app: raddit + template: + metadata: + labels: + app: raddit + spec: + containers: + - name: raddit + image: dmacademy/raddit + env: + - name: DATABASE_HOST + value: mongo-service +--- +apiVersion: apps/v1beta1 # implies the use of kubernetes 1.7 + # use apps/v1beta2 for kubernetes 1.8 +kind: Deployment +metadata: + name: mongo-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: mongo + template: + metadata: + labels: + app: mongo + spec: + containers: + - name: mongo + image: mongo:3.2 +---- + +In this file we describe two `Deployment objects` which define what application containers and in what quantity should be run. +The Deployment objects have the same structure so I'll briefly go over only one of them. + +Each Kubernetes object has 4 required fields: + +* `apiVersion` - Which version of the Kubernetes API you're using to create this object. 
+You'll need to change that if you're using Kubernetes API version different than 1.7 as in this example. +* `kind` - What kind of object you want to create. +In this case we create a Deployment object. +* `metadata` - Data that helps uniquely identify the object. +In this example, we give the deployment object a name according to the name of an application it's used to run. +* `spec` - describes the `desired state` for the object. +`Spec` configuration will differ from object to object, because different objects are used for different purposes. + +In the Deployment object's spec we specify, how many `replicas` (instances of the same application) we want to run and what those applications are (`selector`) + +[source,yml] +---- +spec: + replicas: 2 + selector: + matchLabels: + app: raddit +---- + +In our case, we specify that we want to be running 2 instances of applications that have a label `app=raddit`. +*Labels* are used to give identifying attributes to Kubernetes objects and can be then used by *label selectors* for objects selection. + +We also specify a `Pod template` in the spec configuration. +*Pods* are lower level objects than Deployments and are used to run only `a single instance of application`. +In most cases, Pod is equal to a container, although you can run multiple containers in a single Pod. + +The `Pod template` which is a Pod object's definition nested inside the Deployment object. +It has the required object fields such as `metadata` and `spec`, but it doesn't have `apiVersion` and `kind` fields as those would be redundant in this case. +When we create a Deployment object, the Pod object(s) will be created as well. +The number of Pods will be equal to the number of `replicas` specified. +The Deployment object ensures that the right number of Pods (`replicas`) is always running. + +In the Pod object definition (`Pod template`) we specify container information such as a container image name, a container name, which is used by Kubernetes to run the application. +We also add labels to identify what application this Pod object is used to run, this label value is then used by the `selector` field in the Deployment object to select the right Pod object. + +[source,yaml] +---- + template: + metadata: + labels: + app: raddit + spec: + containers: + - name: raddit + image: dmacademy/raddit + env: + - name: DATABASE_HOST + value: mongo-service +---- + +Notice how we also pass an environment variable to the container. +`DATABASE_HOST` variable tells our application how to contact the database. +We define `mongo-service` as its value to specify the name of the Kubernetes service to contact (more about the Services will be in the next section). + +Container images will be downloaded from Docker Hub in this case: the generic mongo container and the raddit image uploaded to the dmacademy organization. + +_It would be nice if we could use the locally built raddit image. +Extra credit for anyone who can figure out how to do that._ + +== Create Deployment Objects + +Run a kubectl command to create Deployment objects inside your Kubernetes cluster (make sure to provide the correct path to the manifest file): + +[source,bash] +---- +$ kubectl apply -f manifests/deployments.yaml +---- + +Check the deployments and pods that have been created: + +[source,bash] +---- +$ kubectl get deploy +$ kubectl get pods +---- + +== Service manifests + +Running applications at scale means running _multiple containers spread across multiple VMs_. 
+
+This raises questions such as: How do we load balance between all of these application containers?
+How do we provide a single entry point for the application so that we can connect to it via that entry point instead of connecting to a particular container?
+
+These questions are addressed by the *Service* object in Kubernetes.
+A Service is an abstraction which you can use to logically group containers (Pods) running in your cluster that all provide the same functionality.
+
+When a Service object is created, it is assigned a unique IP address called `clusterIP` (a single entry point for our application).
+Other Pods can then be configured to talk to the Service, and the Service will load balance the requests to containers (Pods) that are members of that Service.
+
+We'll create a Service for each of our applications, i.e. `raddit` and `MongoDB`.
+Create a file called `services.yaml` inside the `kubernetes/manifests` directory with the following content:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: raddit-service
+spec:
+  type: NodePort
+  selector:
+    app: raddit
+  ports:
+  - protocol: TCP
+    port: 9292
+    targetPort: 9292
+    nodePort: 30100
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: mongo-service
+spec:
+  type: ClusterIP
+  selector:
+    app: mongo
+  ports:
+  - protocol: TCP
+    port: 27017
+    targetPort: 27017
+----
+
+In this manifest, we describe two Service objects of different types.
+You should already be familiar with the general object structure, so I'll just go over the `spec` field which defines the desired state of the object.
+
+The `raddit` Service has a NodePort type:
+
+[source,yaml]
+----
+spec:
+  type: NodePort
+----
+
+This type of Service makes the Service accessible on each Node's IP at a static port (the NodePort).
+We use this type to be able to contact the `raddit` application later from outside the cluster.
+
+The `selector` field is used to identify a set of Pods to which to route packets that the Service receives.
+In this case, Pods that have a label `app=raddit` will become part of this Service.
+
+[source,yaml]
+----
+  selector:
+    app: raddit
+----
+
+The `ports` section specifies the port mapping between a Service and the Pods that are part of this Service, and also contains the definition of a node port number (`nodePort`) which we will use to reach the Service from outside the cluster.
+
+[source,yaml]
+----
+  ports:
+  - protocol: TCP
+    port: 9292
+    targetPort: 9292
+    nodePort: 30100
+----
+
+The requests that come to any of your cluster nodes' public IP addresses on the specified `nodePort` will be routed to the `raddit` Service's cluster-internal IP address.
+The Service, which is listening on port 9292 (`port`) and is accessible within the cluster on this port, will then route the packets to the `targetPort` on one of the Pods which is part of this Service.
+
+The `mongo` Service differs only in its type.
+The `ClusterIP` type of Service makes the Service accessible only on a cluster-internal IP, so you won't be able to reach it from outside the cluster.
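+
+Once the Services exist (you will create them in the next step), you can verify this behaviour from inside the cluster.
+This check is optional; the `busybox` image is just a convenient throwaway container for testing DNS and is not otherwise used in this tutorial:
+
+[source,bash]
+----
+# ClusterIP Services get a DNS name inside the cluster:
+# start a temporary pod and resolve the mongo Service by name
+$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup mongo-service
+
+# Compare the two Services: raddit-service shows a node port mapping (9292:30100/TCP),
+# while mongo-service only has a cluster-internal IP
+$ kubectl get svc raddit-service mongo-service
+----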
+ +== Create Service Objects + +Run a kubectl command to create Service objects inside your Kubernetes cluster (make sure to provide the correct path to the manifest file): + +[source,bash] +---- +$ kubectl apply -f manifests/services.yaml +---- + +Check that the services have been created: + +[source,bash] +---- +$ kubectl get svc +---- + +== Access Application + +Because we used `NodePort` type of service for the `raddit` service, our application should accessible to us on the IP address of any of our cluster nodes. + +Get a list of IP addresses of your cluster nodes: + +[source,bash] +---- +$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list --filter="tags.items=iac-kubernetes" +---- + +Use any of your nodes public IP addresses and the node port `30100` which we specified in the service object definition to reach the `raddit` application in your browser. + +== Save and commit the work + +Save and commit the `kubernetes` folder created in this lab into your `iac-tutorial` repo. + +== Conclusion + +In this lab, we learned about Kuberenetes - a popular orchestration platform which simplifies the process of running containers at scale. +We saw how it implements the Infrastructure as Code approach in the form of `objects` and `manifests` which allow you to describe in code the desired state of your container infrastructure which spans a cluster of VMs. + +To destroy the Kubernetes cluster, run the following command inside `kubernetes/terraform` directory: + +[source,bash] +---- +$ terraform destroy +---- + +Next: xref:50-what-is-iac.adoc[What is Infrastructure as Code] diff --git a/docs/10-kubernetes.md b/docs/10-kubernetes.md deleted file mode 100644 index 3393cb2..0000000 --- a/docs/10-kubernetes.md +++ /dev/null @@ -1,325 +0,0 @@ -## Kubernetes - -In the previous labs, we learned how to run Docker containers locally. Running containers at scale is quite different and a special class of tools, known as **orchestrators**, are used for that task. - -In this lab, we'll take a look at the most popular Open Source orchestration platform called [Kubernetes](https://kubernetes.io/) and see how it implements Infrastructure as Code model. - -## Intro - -We used Docker Compose to consistently create container infrastructure on one machine (our local machine). However, our production environment may include tens or hundreds of VMs to have enough capacity to provide service to a large number of users. What do you do in that case? - -Running Docker Compose on each VM from the cluster seems like a lot of work. Besides, if you want your containers running on different hosts to communicate with each other it requires creation of a special type of network called `overlay`, which you can't create using only Docker Compose. - -Moreover, questions arise as to: -* how to load balance containerized applications? -* how to perform container health checks and ensure the required number of containers is running? - -The world of containers is very different from the world of virtual machines and needs a special platform for management. - -Kubernetes is the most widely used orchestration platform for running and managing containers at scale. It solves the common problems (some of which we've mentioned above) related to running containers on multiple hosts. And we'll see in this lab that it uses the Infrastructure as Code approach to managing container infrastructure. - -Let's try to run our `raddit` application on a Kubernetes cluster. 
- -## Install Kubectl - -[Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) is command line tool that we will use to run commands against the Kubernetes cluster. - -You can install `kubectl` onto your system as part of Google Cloud SDK by running the following command: - -```bash -$ gcloud components install kubectl -``` - -Check the version of kubectl to make sure it is installed: - -```bash -$ kubectl version -``` - -## Infrastructure as Code project - -Create a new directory called `kubernetes` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. - -## Describe Kubernetes cluster in Terraform - -We'll use [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) service to deploy a Kubernetes cluster of 2 nodes. - -We'll describe a Kubernetes cluster using Terraform so that we can manage it through code. - -Create a directory named `terraform` inside `kubernetes` directory. Download a bundle of Terraform configuration files into the created `terraform` directory. - -```bash -$ wget https://github.com/Artemmkin/gke-terraform/raw/master/gke-terraform.zip -$ unzip gke-terraform.zip -d kubernetes/terraform -$ rm gke-terraform.zip -``` - -We'll use this Terraform code to create a Kubernetes cluster. - -## Create Kubernetes Cluster - -`main.tf` which you downloaded holds all the information about the cluster that should be created. It's parameterized using Terraform [input variables](https://www.terraform.io/intro/getting-started/variables.html) which allow you to easily change configuration parameters. - -Look into `terraform.tfvars` file which contains definitions of the input variables and change them if necessary. You'll most probably want to change `project_id` value. - -``` -// define provider configuration variables -project_id = "infrastructure-as-code" # project in which to create a cluster -region = "europe-west1" # region in which to create a cluster - -// define Kubernetes cluster variables -cluster_name = "iac-tutorial-cluster" # cluster name -zone = "europe-west1-b" # zone in which to create a cluster nodes -``` -After you've defined the variables, run Terraform inside `kubernetes/terraform` to create a Kubernetes cluster consisting of 2 nodes (VMs for running our application containers). - -```bash -$ gcloud services enable container.googleapis.com # enable Kubernetes Engine API -$ terraform init -$ terraform apply -``` - -Wait until Terraform finishes creation of the cluster. It can take about 3-5 minutes. - -Check that the cluster is running and `kubectl` is properly configured to communicate with it by fetching cluster information: - -```bash -$ kubectl cluster-info - -Kubernetes master is running at https://35.200.56.100 -GLBCDefaultBackend is running at https://35.200.56.100/api/v1/namespaces/kube-system/services/default-http-backend/proxy -... -``` - -## Deployment manifest - -Kubernetes implements Infrastructure as Code approach to managing container infrastructure. It uses special entities called **objects** to represent the `desired state` of your cluster. With objects you can describe - -* What containerized applications are running (and on which nodes) -* The compute resources available to those applications -* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance - -By creating an object, you’re effectively telling the Kubernetes system what you want your cluster’s workload to look like; this is your cluster’s `desired state`. 
Kubernetes then makes sure that the cluster's actual state meets the desired state described in the object. - -Most of the times, you describe the object in a `.yaml` file called `manifest` and then give it to `kubectl` which in turn is responsible for relaying that information to Kubernetes via its API. - -**Deployment object** represents an application running on your cluster. We'll use it to run containers of our applications. - -Create a directory called `manifests` inside `kubernetes` directory. Create a `deployments.yaml` file inside it with the following content: - -```yaml -apiVersion: apps/v1beta1 # implies the use of kubernetes 1.7 - # use apps/v1beta2 for kubernetes 1.8 -kind: Deployment -metadata: - name: raddit-deployment -spec: - replicas: 2 - selector: - matchLabels: - app: raddit - template: - metadata: - labels: - app: raddit - spec: - containers: - - name: raddit - image: artemkin/raddit - env: - - name: DATABASE_HOST - value: mongo-service ---- -apiVersion: apps/v1beta1 # implies the use of kubernetes 1.7 - # use apps/v1beta2 for kubernetes 1.8 -kind: Deployment -metadata: - name: mongo-deployment -spec: - replicas: 1 - selector: - matchLabels: - app: mongo - template: - metadata: - labels: - app: mongo - spec: - containers: - - name: mongo - image: mongo:3.2 -``` - -In this file we describe two `Deployment objects` which define what application containers and in what quantity should be run. The Deployment objects have the same structure so I'll briefly go over only one of them. - -Each Kubernetes object has 4 required fields: -* `apiVersion` - Which version of the Kubernetes API you’re using to create this object. You'll need to change that if you're using Kubernetes API version different than 1.7 as in this example. -* `kind` - What kind of object you want to create. In this case we create a Deployment object. -* `metadata` - Data that helps uniquely identify the object. In this example, we give the deployment object a name according to the name of an application it's used to run. -* `spec` - describes the `desired state` for the object. `Spec` configuration will differ from object to object, because different objects are used for different purposes. - -In the Deployment object's spec we specify, how many `replicas` (instances of the same application) we want to run and what those applications are (`selector`) - -```yml -spec: - replicas: 2 - selector: - matchLabels: - app: raddit -``` - -In our case, we specify that we want to be running 2 instances of applications that have a label `app=raddit`. **Labels** are used to give identifying attributes to Kubernetes objects and can be then used by **label selectors** for objects selection. - -We also specify a `Pod template` in the spec configuration. **Pods** are lower level objects than Deployments and are used to run only `a single instance of application`. In most cases, Pod is equal to a container, although you can run multiple containers in a single Pod. - -The `Pod template` which is a Pod object's definition nested inside the Deployment object. It has the required object fields such as `metadata` and `spec`, but it doesn't have `apiVersion` and `kind` fields as those would be redundant in this case. When we create a Deployment object, the Pod object(s) will be created as well. The number of Pods will be equal to the number of `replicas` specified. The Deployment object ensures that the right number of Pods (`replicas`) is always running. 
- -In the Pod object definition (`Pod template`) we specify container information such as a container image name, a container name, which is used by Kubernetes to run the application. We also add labels to identify what application this Pod object is used to run, this label value is then used by the `selector` field in the Deployment object to select the right Pod object. - -```yaml - template: - metadata: - labels: - app: raddit - spec: - containers: - - name: raddit - image: artemkin/raddit - env: - - name: DATABASE_HOST - value: mongo-service -``` - -Notice how we also pass an environment variable to the container. `DATABASE_HOST` variable tells our application how to contact the database. We define `mongo-service` as its value to specify the name of the Kubernetes service to contact (more about the Services will be in the next section). - -Container images will be downloaded from Docker Hub in this case. - -## Create Deployment Objects - -Run a kubectl command to create Deployment objects inside your Kubernetes cluster (make sure to provide the correct path to the manifest file): - -```bash -$ kubectl apply -f manifests/deployments.yaml -``` - -Check the deployments and pods that have been created: - -```bash -$ kubectl get deploy -$ kubectl get pods -``` - -## Service manifests - -Running applications at scale means running _multiple containers spread across multiple VMs_. - -This arises questions such as: How do we load balance between all of these application containers? How do we provide a single entry point for the application so that we could connect to it via that entry point instead of connecting to a particular container? - -These questions are addressed by the **Service** object in Kubernetes. A Service is an abstraction which you can use to logically group containers (Pods) running in you cluster, that all provide the same functionality. - -When a Service object is created, it is assigned a unique IP address called `clusterIP` (a single entry point for our application). Other Pods can then be configured to talk to the Service, and the Service will load balance the requests to containers (Pods) that are members of that Service. - -We'll create a Service for each of our applications, i.e. `raddit` and `MondoDB`. Create a file called `services.yaml` inside `kubernetes/manifests` directory with the following content: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: raddit-service -spec: - type: NodePort - selector: - app: raddit - ports: - - protocol: TCP - port: 9292 - targetPort: 9292 - nodePort: 30100 ---- -apiVersion: v1 -kind: Service -metadata: - name: mongo-service -spec: - type: ClusterIP - selector: - app: mongo - ports: - - protocol: TCP - port: 27017 - targetPort: 27017 -``` - -In this manifest, we describe 2 Service objects of different types. You should be already familiar with the general object structure, so I'll just go over the `spec` field which defines the desired state of the object. - -The `raddit` Service has a NodePort type: - -```yaml -spec: - type: NodePort -``` - -This type of Service makes the Service accessible on each Node’s IP at a static port (NodePort). We use this type to be able to contact the `raddit` application later from outside the cluster. - -`selector` field is used to identify a set of Pods to which to route packets that the Service receives. In this case, Pods that have a label `app=raddit` will become part of this Service. 
- -```yaml - selector: - app: raddit -``` - -The `ports` section specifies the port mapping between a Service and Pods that are part of this Service and also contains definition of a node port number (`nodePort`) which we will use to reach the Service from outside the cluster. - -```yaml - ports: - - protocol: TCP - port: 9292 - targetPort: 9292 - nodePort: 30100 -``` - -The requests that come to any of your cluster nodes' public IP addresses on the specified `nodePort` will be routed to the `raddit` Service cluster-internal IP address. The Service, which is listening on port 9292 (`port`) and is accessible within the cluster on this port, will then route the packets to the `targetPort` on one of the Pods which is part of this Service. - -`mongo` Service is only different in its type. `ClusterIP` type of Service will make the Service accessible on the cluster-internal IP, so you won't be able to reach it from outside the cluster. - -## Create Service Objects - -Run a kubectl command to create Service objects inside your Kubernetes cluster (make sure to provide the correct path to the manifest file): - -```bash -$ kubectl apply -f manifests/services.yaml -``` - -Check that the services have been created: - -```bash -$ kubectl get svc -``` - -## Access Application - -Because we used `NodePort` type of service for the `raddit` service, our application should accessible to us on the IP address of any of our cluster nodes. - -Get a list of IP addresses of your cluster nodes: - -```bash -$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list --filter="tags.items=iac-kubernetes" -``` - -Use any of your nodes public IP addresses and the node port `30100` which we specified in the service object definition to reach the `raddit` application in your browser. - -## Save and commit the work - -Save and commit the `kubernetes` folder created in this lab into your `iac-tutorial` repo. - -## Conclusion - -In this lab, we learned about Kuberenetes - a popular orchestration platform which simplifies the process of running containers at scale. We saw how it implements the Infrastructure as Code approach in the form of `objects` and `manifests` which allow you to describe in code the desired state of your container infrastructure which spans a cluster of VMs. - -To destroy the Kubernetes cluster, run the following command inside `kubernetes/terraform` directory: - -```bash -$ terraform destroy -``` - -Next: [What is Infrastructure as Code](50-what-is-iac.md) diff --git a/docs/50-what-is-iac.adoc b/docs/50-what-is-iac.adoc new file mode 100644 index 0000000..62e3bfe --- /dev/null +++ b/docs/50-what-is-iac.adoc @@ -0,0 +1,22 @@ += What is Infrastructure as Code? + +You've come a long way going through all the labs and learning about different Infrastructure as Code tools. +Some sort of presentation of what Infrastructure as Code is should already be shaped in your head. + +To conclude this tutorial, I summarize some of the key points about what Infrastructure as Code means. + +. `We use code to describe infrastructure`. +We don't use UI to launch a VM, we decribe its desired characteristics in code and tell the tool to do that. +. `Everyone is using the same tested code for infrastructure management operations and not creating its own implementation each time`. +We talked about it when discussing downsides of scripts. +Common infrastructure management operations should rely on tested code functions which are used in the team. 
+It makes everyday operations more time efficient and less error-prone. +. `Automated operations`. +We don't run commands ourselves to launch and configure a system, but instead use a configuration syntax provided by IaC tool to tell it what should be done. +. `We apply software development practices to infrastructure`. +In software development, practices like keeping code in source control or peer reviews are very common. +They make development reliable and working in a team possible. +Since our infrastructure is described in code, we can apply the same practices to our infrastructure work. + +These are the points that I would make for now. +If you feel like there is something else to add or change, please feel free to send a pull request :) diff --git a/docs/50-what-is-iac.md b/docs/50-what-is-iac.md deleted file mode 100644 index cc8693d..0000000 --- a/docs/50-what-is-iac.md +++ /dev/null @@ -1,12 +0,0 @@ -# What is Infrastructure as Code? - -You've come a long way going through all the labs and learning about different Infrastructure as Code tools. Some sort of presentation of what Infrastructure as Code is should already be shaped in your head. - -To conclude this tutorial, I summarize some of the key points about what Infrastructure as Code means. - -1. `We use code to describe infrastructure`. We don't use UI to launch a VM, we decribe its desired characteristics in code and tell the tool to do that. -2. `Everyone is using the same tested code for infrastructure management operations and not creating its own implementation each time`. We talked about it when discussing downsides of scripts. Common infrastructure management operations should rely on tested code functions which are used in the team. It makes everyday operations more time efficient and less error-prone. -3. `Automated operations`. We don't run commands ourselves to launch and configure a system, but instead use a configuration syntax provided by IaC tool to tell it what should be done. -4. `We apply software development practices to infrastructure`. In software development, practices like keeping code in source control or peer reviews are very common. They make development reliable and working in a team possible. Since our infrastructure is described in code, we can apply the same practices to our infrastructure work. - -These are the points that I would make for now. If you feel like there is something else to add or change, please feel free to send a pull request :) diff --git a/docs/convert.sh b/docs/convert.sh new file mode 100755 index 0000000..d600e82 --- /dev/null +++ b/docs/convert.sh @@ -0,0 +1,8 @@ +#!/bin/bash + +find ./ -name "*.md" \ + -type f | xargs -I @@ \ + bash -c 'kramdoc \ + --format=GFM \ + --wrap=ventilate \ + --output=./@@.adoc ./@@'; diff --git a/docs/convert2.sh b/docs/convert2.sh new file mode 100755 index 0000000..5d89e8d --- /dev/null +++ b/docs/convert2.sh @@ -0,0 +1,4 @@ +kramdoc --format=GFM \ + --output=./00-introduction.adoc \ + --wrap=ventilate \ + ./00-introduction.md diff --git a/img/webPreview.png b/img/webPreview.png new file mode 100644 index 0000000..2842684 Binary files /dev/null and b/img/webPreview.png differ diff --git a/img/webPreviewPort.png b/img/webPreviewPort.png new file mode 100644 index 0000000..87570cd Binary files /dev/null and b/img/webPreviewPort.png differ