Deploying the first master
You are going to deploy the Redis master, which you will delete in the next step. This is done for no other reason than for you to learn about ConfigMaps.
Let's do this.
- Open your friendly cloud shell, as highlighted in the following screenshot:
- Type the following:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
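If you want to keep an eye on things while the objects are being created, the following commands are one way to do so (redis-master is the name given to the Deployment in the YAML file we are about to look at):
kubectl get deployment redis-master  # check the state of the Deployment
kubectl get pods --watch             # watch the Pods being created; press Ctrl + C to stop watching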
It will take some time for the image to download and the container to start running. While you wait, let me explain the command you just typed and executed. Let's start by exploring the content of the YAML file you used:
1  apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
2  kind: Deployment
3  metadata:
4    name: redis-master
5    labels:
6      app: redis
7  spec:
8    selector:
9      matchLabels:
10       app: redis
11       role: master
12       tier: backend
13   replicas: 1
14   template:
15     metadata:
16       labels:
17         app: redis
18         role: master
19         tier: backend
20     spec:
21       containers:
22       - name: master
23         image: k8s.gcr.io/redis:e2e # or just image: redis
24         resources:
25           requests:
26             cpu: 100m
27             memory: 100Mi
28         ports:
29         - containerPort: 6379
Let's go through the code and get straight to the meat of it:
- Line 23: Says what Docker image we are going to run. In this case, it is the redis image tagged with e2e (presumably the latest image of redis that successfully passed its end-to-end [e2e] tests).
- Lines 28-29: Say this container is going to listen on port 6379.
- Line 22: Gives this container a name, which is master.
- Lines 24-27: Set the cpu/memory resources requested for the container. In this case, the request is 0.1 CPU, which is equal to 100m and is also often referred to as 100 millicores. The memory requested is 100Mi, or 104857600 bytes, which is equal to ~105M (https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/). You can also set cpu and memory limits in the same way, as shown in the snippet that follows this list.
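To make that last point concrete, this is roughly how the same resources block would look if you also wanted to cap the container at, say, 200 millicores and 256Mi of memory; the limit values here are purely illustrative and are not part of the guestbook example:
resources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:        # optional upper bounds; the container cannot exceed these
    cpu: 200m
    memory: 256Mi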
This is very similar to the arguments you would give to Docker to run a particular container image. If you had to run this manually, you would start simple and keep adding arguments until you ended up with something like the following:
docker run -d k8s.gcr.io/redis:e2e # run the redis docker image with tag e2e in detached mode
docker run --name named_master -d k8s.gcr.io/redis:e2e # run the image with the name named_master
docker run --name net_master -p 6379:6379 -d k8s.gcr.io/redis:e2e # expose the port 6379
docker run --name master -p 6379:6379 -m 100M --cpus 0.1 -d k8s.gcr.io/redis:e2e # also limit the container to 100M of memory and 0.1 CPU
The container spec (lines 21-29) tells Kubernetes to run the specified container with the supplied arguments. So far, Kubernetes has not provided us with anything more than what we could have typed in as a Docker command. Let's continue with the explanation of the code:
- Line 13: Tells Kubernetes that we need exactly one copy of the Redis master running. This is a key aspect of the declarative nature of Kubernetes. You provide a description of the containers your applications need to run (in this case, only one replica of the Redis master) and Kubernetes takes care of it.
- Lines 14-19: Add labels to the running instance so that it can be grouped and connected to other containers. We will discuss labels later on to see how they are used.
- Line 2: Tells Kubernetes that we would like a Deployment to be performed. When Kubernetes started out, Replication Controllers were used (and are still widely used) to launch containers, and you can still do most of the work you need using just Replication Controllers. Deployments add convenience on top of them: they provide mechanisms to roll out changes and to roll back if required, and you can specify the strategy to use when pushing an update (RollingUpdate or Recreate). You can see this in action with the kubectl rollout commands shown after this list.
- Lines 4-6: Give the Deployment a name, redis-master, and a label of its own (app: redis).
- Lines 7-12: Let us specify the containers that this Deployment will manage. In our example, it says that this Deployment will select and manage all containers whose labels match (app == redis, role == master, tier == backend). This exactly matches the labels set in lines 14-19, and it is also how you can find those containers yourself, as shown after this list.
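To see the rollout machinery mentioned under Line 2 in action, you can use the kubectl rollout family of commands against the Deployment we just created. This is a minimal sketch; rolling back only does something useful once you have actually pushed an update:
kubectl rollout status deployment/redis-master   # wait until the current rollout has completed
kubectl rollout history deployment/redis-master  # list the revisions Kubernetes knows about
kubectl rollout undo deployment/redis-master     # roll back to the previous revision, if there is one
Similarly, the very same labels used in the selector (lines 7-12) can be used to look up the containers this Deployment manages, which is a handy way to convince yourself that label matching really is what ties everything together:
kubectl get pods -l app=redis,role=master,tier=backend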