Summary
Docker and Kubernetes
Docker
- Understanding Docker Concepts
- Basic Docker Command
- Dockerfile
- Docker Layer
- Health Check
- Restart Policies
- Docker Volumes
- Docker Network
Docker Compose
Maven Docker Plugin
Kubernetes
- Understanding Kubernetes
- Pods
- Update Resource Template
- Liveness Probe
- ReplicaSet
- Service
- Readiness Probe
- Deployment
- Environment Variable
- Config Maps
- Secrets
- Host Path
Understanding Docker Concepts
Why Docker?
- To pack an application with all the dependencies it needs into a single, standardized unit for deployment.
- Packaging all of this into a complete image guarantees that it is portable.
- It will always run in the same way, no matter what environment it is deployed in.
- A tool that helps solve common problems such as installing, distributing, and managing software.
Docker images
- Think of an image as a read-only template which is the base foundation for creating containers.
- Docker images are executable packages that include everything needed to run an application.
- It includes the code, a runtime, libraries, environment variables, and configuration files.
- It can also include an application server like Tomcat or Netty, and/or the application itself.
- Images are created using a series of commands, called instructions.
- Instructions are placed in the Dockerfile which we will learn later.
Docker Containers
- A running instance of an image is called a container.
- Docker containers are a runtime instance of an image.
- What the image becomes in memory when executed.
- It is an image with state, or a user process.
+-----------------------+ +-----------------------+
| Container | | Container |
| +---------------+ | | |
| | Application | | | |
| +---------------+ | | |
| +---------------+ | | +---------------+ |
| | Java | | | | MySQL | |
| +---------------+ | | +---------------+ |
| +---------------+ | | +---------------+ |
| | Alpine | | | | Ubuntu | |
| +---------------+ | | +---------------+ |
| | | |
+-----------------------+ +-----------------------+
Basic Commands
Running container
docker run @image
- Let's start with simple image hello-world.
- To run that image execute docker run hello-world.
- Here's the output you will get:
➜ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
...
- If you look at the output you will see that docker does a few things for us.
- It will check if the image is available locally.
- If not, it will be pulled down from the remote repository.
- It allocates the resources the container needs, like CPU, IP address, and memory.
- After that it will run the container.
- When the container starts, it executes a command.
- The command is defined in the image itself. We will talk about this later.
Listing local images
docker image list
- Or docker images
- Lists the images that are available in your local system.
- You can use whichever form of the command you like.
➜ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest fce289e99eb9 6 weeks ago 1.84kB
Listing container
- When we ran the image with docker run it created a container. Duh!! that is the definition of a container.
- How can you see the list of your containers?
docker container list
- Or docker ps
- Use either command above and you will see a list of running containers.
➜ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- I don't see our container here, why?
- Because hello-world is a simple image.
- It executes one command and exits, hence the container is stopped.
--all
- If you want to see containers that are stopped you can use the --all option.
- docker container list --all OR docker ps --all
➜ docker container list --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb11cd638c55 hello-world "/hello" 9 minutes ago Exited (0) 9 minutes ago stoic_shtern
- Now we are able to see our container in the list.
Override default image command
- After docker run @image we can pass a @command and the @arguments it accepts.
- Like docker run @image @command @arguments.
- For this we will use another simple image, busybox.
- busybox has simple unix commands which we can use.
docker run busybox sleep 5000
- docker run busybox sleep 5000
- This will run busybox and execute the command sleep 5000.
- This emulates a long-running process, as the container will not stop until the command exits.
- If you list the containers right now (in a new terminal, of course), you will see the status is different.
- It is not Exited.
➜ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7acdce83ad6 busybox "sleep 5000" 23 seconds ago Up 21 seconds gallant_hodgkin
Stopping running containers
docker container stop @id / @name
- Or docker stop @id / @name
- To stop the container you can use either the id or the name of the container.
- If you list the containers again (don't forget --all) you will see the updated status for the container.
- You can pass space-separated @id / @name values to stop more than one container.
Running container in background
- When we ran docker run busybox sleep 5000, it blocked our terminal.
- If you don't want that to happen, we have an option to run the container in background mode.
-d option
- If you pass the -d option when running the container, it will run in the background.
- docker run @options @image @command @arguments
- docker run -d busybox sleep 5000
- If you run the above command, it will echo the container id and continue its process in the background.
Start the stopped container
docker start @name/@id
- It will start the existing container instead of creating a new one.
- Not recommended, as containers are supposed to be disposable.
Running additional commands inside a container
Attach to a running container's terminal
-it option
- The -it option attaches the container's terminal to our host.
- After that, every command you issue will be executed inside the container.
- (Play around with some commands)
- If you want to get out of the container you can exit the terminal.
- If the default command in the image is not a shell, we can pass a shell as the @argument: docker run -it busybox /bin/sh
docker container exec
- docker container exec is used to execute a command inside a running container: docker container exec @id @command
- We can use the same -it option here and pass in a shell as the command (which is the default shell for busybox).
- So docker container exec -it @id sh will attach the container's terminal to the host.
- If you run ps aux inside the container, you can see sleep 5000 is a running process.
➜ docker container exec -it @id sh
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 sleep 5000
9 root 0:00 /bin/sh
16 root 0:00 ps aux
Attach when starting a stopped container
docker start -ia @name/@id
- If you want to attach the terminal when starting a stopped container, the option is a bit different, i.e. -ia.
Deleting Container
Delete all stopped container
docker container prune
- To delete all stopped containers, we can use the command above.
- Note: it will not delete containers that are still running.
Deleting single container
- To delete a container we use docker container rm @id/@name
- If you were following the tutorial you might have one container still running in the background. Let's stop and delete this container.
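- For example, a sketch assuming the busybox container from earlier (use the id or name from docker container list --all):
docker container stop @id
docker container rm @id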
Deleting Image
Deleting all dangling images
- To delete all dangling images we can use docker image prune.
- Dangling image = an image without a tag that is not used by any container.
Delete single image
- To delete an image we can run docker image rm @id.
What is Dockerfile
- A Dockerfile is just a plain text file.
- The plain text file contains a series of instructions.
- Docker reads the Dockerfile when you start the process of building an image.
- It executes the instructions one by one, in order.
- At the end a final read-only image is created.
- It's important to know that Docker uses images to run your code, not the Dockerfile.
- A Dockerfile is just a convenient way to build your docker images.
- The default file name for a Dockerfile is Dockerfile.
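- As a teaser, the smallest possible Dockerfile is a single instruction, e.g. the base image we use throughout this workshop:
FROM openjdk:8-jre-alpine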
Building an image
docker build
- Even if you have a single instruction FROM you can build your image.
- We can simply execute docker build -f @pathToDockerfile -t @name:@tag @contextPath.
- @contextPath is the path you want docker to use as the build context.
- -f takes the path to the Dockerfile. If your file exists within the docker build context path and your file name is Dockerfile, you don't need to use the -f flag.
- -t represents the tag. This is so that you can name the image you are building.
- If you omit the tag part, docker will add the latest tag by default.
- Notice that when docker builds the image, if you don't have the base image locally it will download it first.
➜ docker build -f docker/web-basic -t amantuladhar/doc-kub-ws:web-basic .
Sending build context to Docker daemon 17.02MB
Step 1/1 : FROM openjdk:8-jre-alpine
8-jre-alpine: Pulling from library/openjdk
6c40cc604d8e: Pull complete
e78b80385239: Pull complete
f41fe1b6eee3: Pull complete
Digest: sha256:712abc7541d52998f14b43998227eeb17fd895cf10957ad5cf195217ab62153e
Status: Downloaded newer image for openjdk:8-jre-alpine
---> 1b46cc2ba839
Successfully built 1b46cc2ba839
Successfully tagged amantuladhar/doc-kub-ws:web-basic
- Run docker image list and you will see your image in the list.
- (Psst!! You can go inside your image and play around with it.)
FROM
- FROM is the first instruction in a Dockerfile.
- Sets the base image for every subsequent instruction.
- If you skip the tag, docker will use the latest tag.
- The latest tag may not be the latest image.
- If we provide an invalid tag, docker will complain.
FROM openjdk:8-jre-alpine
WORKDIR
- Sets the starting point from where you want to execute subsequent instructions.
- We can set both absolute and relative paths.
- We can have multiple WORKDIR instructions in an image.
- If a relative path is used, it will be relative to the previous WORKDIR.
- Think of this as changing the directory.
FROM openjdk:8-jre-alpine
WORKDIR /myApp
COPY
COPY @source @destination
- Copies files from source into the container file system.
- Source is a path relative to where the build process was started (the build context).
- Source can contain wildcards like * and ?.
- Let's add our app executable inside the docker image.
- Before that you need an app.
- You can create one yourself, or I have a simple spring app here.
- For this exercise let's switch to branch dockerfile-initial.
- Let's build the jar for our app: ./mvnw clean install
- Maven creates the jar inside the /target folder.
- web-basic is a Dockerfile that has the complete instructions.
FROM openjdk:8-jre-alpine
WORKDIR /myApp
COPY ["onlyweb/target/*.jar", "./app.jar"]
- That's it. Build your image, go inside, and run the app.
Sending build context to Docker daemon 17.03MB
Step 1/3 : FROM openjdk:8-jre-alpine
---> 1b46cc2ba839
Step 2/3 : WORKDIR /myApp
---> Running in 529b938e7bde
Removing intermediate container 529b938e7bde
---> a40643187675
Step 3/3 : COPY ["onlyweb/target/*.jar", "./app.jar"]
---> 63722ad415fe
Successfully built 63722ad415fe
Successfully tagged amantuladhar/doc-kub-ws:web-basic
Running your app
docker run -it amantuladhar/doc-kub-ws:web-basic
➜ docker run -it amantuladhar/doc-kub-ws:web-basic
/myApp # ls
app.jar
/myApp # java -jar app.jar
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.1.2.RELEASE)
...
...
2019-02-13 15:02:09.789 INFO 9 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
- You still can't access the app from outside, as you haven't asked docker to publish the port the app is listening on.
ADD
- ADD @source @destination
- Works like COPY.
- Has a few extra features, like archive extraction (local tar archives are extracted automatically) and support for remote URLs as the source.
- Best practice is to use COPY if you don't need those additional features.
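- For example, ADD extracts a local tar archive into the destination automatically (hypothetical archive name):
ADD myApp.tar.gz /myApp/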
EXPOSE
EXPOSE @port
- EXPOSE tells Docker that the running container listens on specific network ports.
- This acts as a kind of port-mapping documentation that can then be used when publishing the ports.
- EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine.
- To allow this to happen you need to publish the ports using the -p option when running the container.
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
COPY ["onlyweb/target/*.jar", "./app.jar"]
Port Publishing
- If you want to access your app from the host (outside the container), you need to publish the port where your app is expecting a connection.
- To do that we have
-p @hostPort:@containerPort
option. - To access your app on localhost:9090
➜ docker run -it -p 9090:8080 amantuladhar/doc-kub-ws:web-basic
# Running your app inside the container
/myApp # java -jar app.jar
....
....
2019-02-14 04:47:05.036 INFO 10 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-02-14 04:47:05.046 INFO 10 --- [ main] c.e.cloudnative.CloudNativeApplication : Started CloudNativeApplication in 4.73 seconds (JVM running for 5.685)
- Here's the response we get when we call localhost:9090/test
➜ http localhost:9090/test
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Thu, 14 Feb 2019 04:49:07 GMT
Transfer-Encoding: chunked
{
"app.version": "v1-web",
"host.address": "172.17.0.2",
"message": "Hello Fellas!!!"
}
RUN
RUN @command
- RUN is the central executing instruction of a Dockerfile.
- The RUN command will execute a command or a list of commands in a new layer on top of the current image.
- The resulting image will be the new base image for subsequent instructions.
- To make your Dockerfile more readable and easier to maintain, you can split long or complex RUN statements across multiple lines, separating them with a backslash (\).
- Let's install curl in our image. (We will need curl later.)
RUN apk add --no-cache curl
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
RUN apk add --no-cache curl
COPY ["onlyweb/target/*.jar", "./app.jar"]
- If you build your image now, you will have the curl command available when you run the container.
- You can test if curl exists if you want.
➜ docker build -f docker/web-basic -t amantuladhar/doc-kub-ws:web-basic .
...
Step 4/5 : RUN apk add --no-cache curl
---> Running in 883ac8f78866
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/4) Installing nghttp2-libs (1.35.1-r0)
(2/4) Installing libssh2 (1.8.0-r4)
(3/4) Installing libcurl (7.63.0-r0)
(4/4) Installing curl (7.63.0-r0)
...
ENTRYPOINT
- The ENTRYPOINT specifies a command that will always be executed when the container starts.
- Docker has a default ENTRYPOINT which is /bin/sh -c.
- ENTRYPOINT ["executable", "param1", "param2"] is the exec form, preferred and recommended.
- ENTRYPOINT command param1 param2 is the shell form.
- ENTRYPOINT is a runtime instruction, whereas RUN and ADD are build-time instructions.
- We can use the --entrypoint option to override it when you run the container.
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
RUN apk add --no-cache curl
COPY ["onlyweb/target/*.jar", "./app.jar"]
ENTRYPOINT ["java", "-jar", "app.jar"]
- Build the image and run it, this time without the -it option.
docker run -p 9090:8080 amantuladhar/doc-kub-ws:web-basic
- Your app will run; you didn't need to go inside the container and execute the command yourself.
- -p 9090:8080 was added so that you can access your app from the host.
CMD
- CMD also specifies a command that will execute when the container starts.
- The CMD specifies the arguments that will be fed to the ENTRYPOINT.
- CMD ["executable","parameter1","parameter2"] is the so-called exec form. (Preferred)
- CMD command parameter1 parameter2 is the shell form of the instruction.
- Like ENTRYPOINT, CMD is a runtime instruction as well.
- You already know how to override the CMD: just pass it after the image name when you run the container.
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
RUN apk add --no-cache curl
COPY ["onlyweb/target/*.jar", "./app.jar"]
ENTRYPOINT ["java", "-jar"]
CMD ["app.jar"]
- As you can see, ENTRYPOINT defines the command that gets executed when the container starts.
- CMD is passing an argument to the ENTRYPOINT.
- Build and run the app: docker run -p 9090:8080 amantuladhar/doc-kub-ws:web-basic
Overriding CMD
- docker run @image @command @arguments
- docker run -p 9090:8080 amantuladhar/doc-kub-ws:web-basic test.jar
- Of course the above command won't run, because we don't have test.jar in our image.
➜ docker run -p 9090:8080 amantuladhar/doc-kub-ws:web-basic test.jar
Error: Unable to access jarfile test.jar
- How about we try to attach the container terminal to the host terminal using -it?
- docker run -it amantuladhar/doc-kub-ws:web-basic won't work, as this will just run the app.
- docker run -it amantuladhar/doc-kub-ws:web-basic sh won't work either, as it internally runs java -jar sh.
- If you really want to attach the container terminal to the host terminal you need to override the ENTRYPOINT.
docker run --entrypoint sh -it amantuladhar/doc-kub-ws:web-basic
➜ docker run --entrypoint sh -it amantuladhar/doc-kub-ws:web-basic
/myApp #
Understanding Docker Layers
Docker layers?
- A layer is basically a change on an image, or an intermediate image.
- Every instruction you specify (FROM, RUN, COPY, etc.) in your Dockerfile causes the previous image to change, thus creating a new layer.
- That image layer then becomes the parent for the layer created by the next instruction.
- The final image we use consists of a series of layers which are stacked one on top of another.
- You can think of it as staging changes when you're using git. You add a file's change, then another one, then another one.
- When you built your image before, you also created lots of intermediate images.
docker build . -t docker-kubernetes:dockerfile-basics
+---------------------------------+
+------------> | 6e384ad670e7 |
| | CMD ["java", "-jar", "app.jar"] |
| +---------------------------------+
+------------> | adfedcd08e78 |
| |COPY ["target/*.jar", "./app.jar"|
| +---------------------------------+
+------------> | aae885b8525e |
Layer Id+----+ | RUN apk add --no-cache curl |
| +---------------------------------+
+------------> | 14446213dae8 |
| | EXPOSE 8080 |
| +---------------------------------+
+------------> | a40643187675 |
| | WORKDIR /myApp |
| +---------------------------------+
+------------> | 1b46cc2ba839 |
| FROM openjdk:8-jre-alpine |
+---------------------------------+
Union File System
How Union Filesystem works?
- By using the union filesystem, Docker combines all these layers into a single image entity.
- Filesystem structure of the top layer will merge with the structure of the layer beneath.
- Files and directories which have the same path as in the previous layer will override those beneath.
+-----------------+
| FILE_4.txt |----+
+-----------------+ |
| +---------------+
| | |
+-----------------+ | | FILE_1.txt |
| FILE_2.txt |---------> | FILE_2.txt |
| FILE_3.txt | | | FILE_3.txt |
+-----------------+ | | FILE_4.txt |
| | |
| +---------------+
+-----------------+ |
| FILE_1.txt |----+
| FILE_2.txt |
+-----------------+
How are layers connected?
- To maintain the order of layers, Docker utilizes the concept of layer IDs and pointers.
- Each layer contains the ID and a pointer to its parent layer.
- A layer without a pointer referencing the parent is the first layer in the stack, a base.
- Layers are reusable and cacheable. If the same layers are found, docker will reuse them.
- Reusable layers are also a reason why Docker is so lightweight in comparison to full virtual machines, which don't share anything. Thanks to layers, when you pull an image you often don't have to download all of its filesystem.
+---------------------------+
| Layer Id |
| |
| Parent Id |
+------------|--------------+
|
+------------v--------------+
| Layer id |
| |
| Parent Id |
+------------|--------------+
|
+------------v--------------+
| Layer id |
| |
| Parent Id |
+------------+--------------+
Health Check
What is HEALTHCHECK
- The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working.
- There could be a scenario where our application process is still running, but no longer responds to our requests.
- The HEALTHCHECK instruction helps us identify this issue.
- We can have multiple HEALTHCHECK instructions in our Dockerfile, but only the last one will take effect.
- When a container has a HEALTHCHECK specified, it has a health status in addition to its normal status.
Note: Kubernetes doesn't support this HEALTHCHECK instruction. Kubernetes has its own way of checking if containers are healthy.
HEALTHCHECK
- The HEALTHCHECK instruction can be used in two forms.
- HEALTHCHECK [OPTIONS] CMD command, which checks container health by running a command inside the container.
- HEALTHCHECK NONE, which disables any health check inherited from the base image.
- If you are using the app from Simple Spring App, make sure you are on the dockerfile-initial branch.
- We have a couple of REST endpoints in the app:
- /test/ which is a kind of hello-world endpoint for us.
- /test/status-5xx which will return status 200 for the first 5 calls, but after that it will throw an exception. Our app will still be running.
- /test/exit-1 to exit the app with a non-zero status code.
- /test/exit-0 to exit the app with a zero status code.
- Let's add HEALTHCHECK in our image.
HEALTHCHECK CMD curl --fail http://localhost:8080/test || exit 1
- We are making a curl request to our app, and if it fails the health check command exits with status code 1.
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
RUN apk add --no-cache curl
COPY ["onlyweb/target/*.jar", "./app.jar"]
CMD ["java", "-jar", "app.jar"]
HEALTHCHECK CMD curl --fail http://localhost:8080/test || exit 1
- Let's build and run this image.
IMAGE STATUS
amantuladhar/doc-kub-ws:web-basic Up XX seconds (healthy)
- We will play around with HEALTHCHECK more, but first let's learn about different options HEALTHCHECK supports.
--interval
- --interval=@duration
- Default duration time is 30s
- This configures how long Docker waits between runs of the HEALTHCHECK CMD.
--timeout
- --timeout=@duration
- Default timeout is 30s
- This configures the time it should take a HEALTHCHECK CMD to finish.
- If a single CMD run takes longer than timeout seconds then the check is considered to have failed.
--retries
- --retries=@numberOfRetries
- Default value for retry is 3.
- This configures how many times the HEALTHCHECK needs to fail before the container is considered unhealthy.
--start-period
- --start-period=@duration
- Default value for the start period duration is 0s.
- With this option we can account for the time it takes the app to start.
- If we ran a premature HEALTHCHECK, our app might not have initialized yet, which could result in our container being marked unhealthy.
- Health check failures during the start period are not counted toward the retry limit.
Playing with HEALTHCHECK
- Let's update our HEALTHCHECK instruction.
FROM openjdk:8-jre-alpine
WORKDIR myapp/
COPY ["onlyweb/target/*.jar", "app.jar"]
EXPOSE 8080
RUN apk add --no-cache curl
CMD ["java", "-jar", "app.jar"]
HEALTHCHECK --start-period=10s \
--interval=5s \
--timeout=2s \
--retries=3 \
CMD curl --fail http://localhost:8080/test/status-5xx || exit 1
- Build your image and run it, and keep an eye on the health status.
- Remember status-5xx will return status 200 for the first 5 calls.
- So for a minute or so our container will be healthy.
- But because the HEALTHCHECK is called frequently, at exactly the 6th call it will start to fail.
- The 6th call alone won't flag our container as unhealthy, because we defined retries.
- Docker makes sure that three consecutive failures happen before it flags the container as unhealthy.
IMAGE CREATED STATUS
amantuladhar/doc-kub-ws:web-basic 10 seconds ago Up 10 seconds (health: starting)
IMAGE CREATED STATUS
amantuladhar/doc-kub-ws:web-basic 24 seconds ago Up 24 seconds (healthy)
IMAGE CREATED STATUS
amantuladhar/doc-kub-ws:web-basic 2 minutes ago Up 2 minutes (unhealthy)
Complete Dockerfile with health check: web-healthcheck
Restart Policies
- Restart policies are used to control whether containers start automatically when they exit, or even when Docker restarts.
- Restart policies tells Docker how to react when a container shuts down.
- A restart policy only takes effect after a container starts successfully.
- Starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it.
- This prevents a container which does not start at all from going into a restart loop.
- By using the --restart option with the docker run command you can specify a restart policy.
- Types of restart policies:
no
always
on-failure
unless-stopped
no
- The no policy is the default restart policy and simply will not restart a container in any case.
always
- If we want the container to be restarted no matter what exit code the command has, we can use the always restart policy.
- Whenever the Docker service is restarted, containers using the always policy will also be restarted.
on-failure
- Restarts your container whenever it exits with a non-zero exit status, and does not restart it otherwise.
- You can optionally provide a number of times for Docker to attempt to restart the container.
--restart=on-failure:@number
- Docker uses a delay between restarts of the container, as a kind of flood protection.
- This is an increasing delay; it starts with the value of 100 milliseconds, then Docker will double the previous delay.
- If a container is successfully restarted, the delay is reset to the default value of 100 milliseconds
unless-stopped
- Again, similar to always: if we want the container to be restarted regardless of the exit code, we can use unless-stopped.
- It will restart the container regardless of the exit status, but it will not be started on daemon startup if the container was put into a stopped state before.
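- For example, a sketch using the workshop image:
docker run -d --restart=unless-stopped amantuladhar/doc-kub-ws:web-basic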
- If you are using the app from Simple Spring App, make sure you are on the dockerfile-initial branch.
- We have a couple of REST endpoints in the app:
- /test/ which is a kind of hello-world endpoint for us.
- /test/status-5xx which will return status 200 for the first 5 calls, but after that it will throw an exception. Our app will still be running.
- /test/exit-1 to exit the app with a non-zero status code.
- /test/exit-0 to exit the app with a zero status code.
Restart Policies in Action
- Let's start our app using on-failure.
- Here's the Dockerfile content (see the sketch below).
- Run the image with the --restart option.
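- A minimal sketch of that Dockerfile, assuming the same web-basic image we built in the CMD section:
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
RUN apk add --no-cache curl
COPY ["onlyweb/target/*.jar", "./app.jar"]
CMD ["java", "-jar", "app.jar"]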
docker run -d -p 9090:8080 --restart=on-failure amantuladhar/doc-kub-ws:web-basic
IMAGE STATUS
amantuladhar/doc-kub-ws:web-basic Up About a minute
- Let's call /test/exit-1, which will end the process with a non-zero exit code.
IMAGE STATUS
amantuladhar/doc-kub-ws:web-basic Restarting (1) 1 second ago
- Now let's call /test/exit-0, which will end the process with a zero exit code.
- If you list the running containers, you won't see it now.
Update restart policy of a running container
docker update --restart=@newPolicy @id/@name
- Where @id/@name identifies the container.
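- For example, to switch a container (here the hypothetical name myApp) to the always policy:
docker update --restart=always myApp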
Docker Volume
- Volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.
Why do we need volume?
- To understand why we need volumes, let's go back to the simple image busybox.
- docker run -it busybox
- Create a directory and add a file: touch test.txt
- Run a new container with the same image, list the files, and you won't see the file you created.
- Why not run the same container? If we run the same container we will still see that file.
- Remember containers are ephemeral and disposable.
- We should not depend on starting the same container.
Named Volume
- The most direct way is to declare a volume at run-time with the -v @name:@path flag.
- @name is the name of the volume.
- @path is the path inside the container.
-v @name:@path
docker run -v myVolume:/my-volume -it busybox
- This will make the directory /my-volume inside the container live outside the UFS and directly accessible on the host.
- Any files that the image held inside the /my-volume directory will be copied into the volume.
- If the container path specified doesn't exist docker will create one for you.
Where can I find the file on the host system?
- We can see where docker is storing our file by inspecting the container.
docker inspect @container_id
- If you look at the "Mounts" property, you will see properties like:
- "Source" is the path on the host
- "Destination" is the path in the container
- If you try to navigate to the path specified by "Source" on your computer, you won't find it. Why?
- For Docker, that host path is inside the Virtual Machine.
- We can also use docker volume list to list all volumes.
- As you can see, the volume name is a big hash at the moment.
Manually create a named volume
- To manually create a named volume, you can use the docker volume create command.
docker volume create --name @volume_name
Reattaching same volume
- If you attach the same volume when running a new container, you will find all the previous files.
docker run -v myVolume:/my-volume -it busybox
# Create a file inside my-volume directory
> touch my-volume/test.txt
# List content of my-volume directory
> ls my-volume/
test.txt
- Running new container with same volume
docker run -v myVolume:/my-volume -it busybox
# List content of my-volume directory
> ls my-volume/
test.txt
Mount host path
- -v has yet another major use case, which can only be accomplished through the -v option.
- Mount a host directory into the container.
docker run -v @host_path:@container_path -it busybox
- The host path must be absolute.
- The container path is absolute as well.
docker run -v $(pwd)/volume/:/my-volume -it busybox
- We can use $(pwd) to get the absolute path of the current directory.
- With the above command, we are mounting the volume/ directory relative to the current working directory.
Simple example
- Let's try this with busybox first.
docker run -v $(pwd)/volume/:/my-volume -it busybox
- Create a file under /my-volume inside the container.
- You will see that file in the host directory as well, under "CURRENT_DIR"/volume.
Using volumes for persistent storage
- Using a volume in the cases above doesn't make much sense.
- We are just creating a file and checking if it exists in subsequent runs.
- But consider the scenario when you run the mysql image.
- You would want to persist the data that you created inside the container.
- No matter how many times you restart your container, we must not lose the data.
Using a volume to persist mysql data
- mysql stores its data inside /var/lib/mysql/.
- So if we map our host path to /var/lib/mysql/, we will be able to persist mysql data.
- If you try to run the mysql image it expects a couple of Environment Variables:
- MYSQL_ROOT_PASSWORD
- MYSQL_DATABASE
docker run \
-v $(pwd)/volume/mysql-data:/var/lib/mysql/ \
-e MYSQL_ROOT_PASSWORD=test \
-e MYSQL_DATABASE=test \
mysql:5.7
I am using the -e option to pass Environment Variables to the container.
- I am deliberately not passing -it when running the container.
- The default CMD of the mysql image is used for starting mysql.
- If we override it by passing a shell executable to attach the pseudo tty, it won't start mysql.
- After the container is running, we can attach the container tty to the host terminal.
- You already learned this, but in case you forgot: to attach the pseudo tty for a running container we use docker container exec -it @container(id/name) sh
docker container exec -it 750d0ddbbe51 sh
- Log in to mysql from the container terminal.
- Create a table and insert some data into that table.
> mysql -u root -p
mysql> USE test;
Database changed
mysql> CREATE TABLE student(id INT(11), name VARCHAR(255));
Query OK, 0 rows affected (0.02 sec)
mysql> INSERT INTO student VALUES (1, 'Aman');
Query OK, 1 row affected (0.01 sec)
mysql> SELECT * FROM student;
+------+------+
| id | name |
+------+------+
| 1 | Aman |
+------+------+
1 row in set (0.00 sec)
- Now stop the running container and run a new one, with the same path ($(pwd)/volume/mysql-data) mapped to /var/lib/mysql/.
- You will see the data is still present.
docker run \
-v $(pwd)/volume/mysql-data:/var/lib/mysql/ \
-e MYSQL_ROOT_PASSWORD=test \
-e MYSQL_DATABASE=test \
mysql:5.7
Docker Network
Why do we need network?
- To see why we need inter-container communication, let's run two containers (in the background).
➜ docker run \
    -p 9090:8080 \
    -d amantuladhar/doc-kub-ws:web-basic
- Let's try to access our app from a different container.
➜ docker run \
-it amantuladhar/network-tools sh
- The network-tools image has a few tools like curl, http, ping, and nc.
- From this container, let's call http localhost:8080/test.
- It won't work because there is no connection between those two containers.
- Consider the scenario where our app needs to talk with a container that is running mysql.
Listing network
docker network list
- If you do docker network list, you will see three networks are already created:
- bridge
- host
- none
- These are networks created by the different network drivers docker supports. (There are a few more drivers.)
- The default driver for a network is bridge.
Create a network
- To create a network you just issue docker network create @name.
docker network create myNet
- This will create a network named myNet with the driver bridge.
Using network
Define which network to connect to
--net=@network_name syntax.
- To use the network we created we can use the --net option.
➜ docker run \
    --name=myApp \
    --net=myNet \
    -p 9090:8080 \
    -d amantuladhar/doc-kub-ws:web-basic
- Notice I am naming my container myself using the --name option. You will know in a bit why I am doing this.
- This will run our container in our network myNet.
- Now let's run the network-tools image on the same network.
➜ docker run \
    --name=myTester \
    --net=myNet \
    -it amantuladhar/network-tools sh
- Try to execute curl on localhost:8080/test or localhost:9090/test.
- It won't work. Why? We explicitly ran both of our containers on the same network.
Container Discovery
- To access an app running on the same network, we cannot use localhost.
- I think this is obvious. From the application standpoint they are not running on the same computer.
- If we can't use localhost what do we use?
- Turns out we have to use the container name, i.e. myApp in our case.
- If you do curl on myApp:8080/test it will work.
Docker has a built-in DNS server that resolves a container name to its IP.
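- For example, from inside the myTester container:
/ # curl myApp:8080/test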
Auto connect to specified container network
- --net has a syntax that connects to another container's network, i.e. --net=container:@name.
- If you use this syntax, docker will automatically connect to the network that is used by the specified container.
- If you use this syntax, you will be able to use localhost instead of the container name.
- Run the image that has our app like before.
➜ docker run \
    --name=myApp \
    --net=myNet \
    -p 9090:8080 \
    -d amantuladhar/doc-kub-ws:web-basic
- Run the tester again with the container: syntax.
docker run \
--name=myTester \
--net=container:myApp \
-it \
amantuladhar/network-tools \
sh
- Test using localhost, and it just works.
/ # curl localhost:8080/test | jq .
{
"message": "Hello Fellas!!!",
"host.address": "172.23.0.3",
"app.version": "v1-web"
}
- As an exercise, you can try to create an application that stores some values in a database.
- You can use the amantuladhar/docker-kubernetes:v1-db image.
- Simple Web App at branch initial-db.
- The app expects that you set the DB_URL, DB_USER and DB_PASS environment variables, where DB_URL is the full connection URL for mysql.
Docker Compose
What is docker compose?
- docker compose is a tool for defining and running complex applications with Docker.
- With Compose, we define a multi-container application in a single file.
- It allows you to spin your application up with a single command.
- Orchestration between containers is easy with docker compose.
- The default file name for docker-compose is docker-compose.yml.
Docker Compose works in the scope of a single host, whereas Kubernetes works on a multi-node cluster.
Simple compose file
- This is what a simple compose file looks like.
version: '3' # 1
services: # 2
myApp: # 3
image: amantuladhar/docker-kubernetes:v1-web # 4
#1 specifies what version of the compose API you want to use.
#2 is a root property. It includes the list of all services (containers) compose will manage.
#3 is the name of the service (container).
#4 specifies the image you want to run.
If you want to build the app yourself you can find it in the Spring Simple Web branch compose-initial.
- You noticed that in a compose file containers are called services.
- Compose can manage multiple services, but for now we are dealing with only one.
- We defined the service called myApp which will run the image.
Build your image
- If you want to build your own image instead of using one, you can do that easily too
version: '3'
services:
myApp:
build: # 1
context: . # 2
dockerfile: Dockerfile # 3
#1 The build property lets compose know that you want to build your image.
#2 context defines where your Dockerfile is stored.
#3 dockerfile takes the file name of the Dockerfile. If the file name is Dockerfile you don't need to specify the property.
- You use docker-compose build to build your image. (docker compose build in the latest version.)
- If you make any changes to the Dockerfile you need to explicitly build the image again.
Any docker-compose command can now be used without the dash, i.e. as part of the docker command itself (docker compose ...).
Running services
- Running the services defined in the compose file is super easy.
- You run any number of services defined inside the compose file with a single command.
- i.e. docker-compose up. On the latest Docker Desktop installation you can just use docker compose up.
- If you want to run it in the background add the -d option.
- If you run the app right now, it will run but it won't be accessible from our host yet.
- We need to publish ports first in order to access the app.
Stopping services
- Starting the services was easy. How about stopping them?
- It is super easy as well.
- docker-compose stop. (docker compose stop in the new version.)
- It will stop all running services at once.
Docker Compose Publishing Ports
Publishing Ports
- We made our app run with single command, but we were not able to access them.
- To access our app we needed to publish the ports.
- We know that for the docker run command we use the -p option.
- In compose we use the ports property.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
- You can publish any number of ports, as you would with the -p option.
- You may have a hint of what docker-compose is trying to achieve.
- If we publish ports with docker run, we need to specify them on every run.
- But with compose, you don't need to.
- We define the options in the yaml file, and we simply use docker-compose up.
- If you run your application now, you will be able to access it from the host.
Docker Compose Logs
Accessing logs
- If you run the app using the -d option, you won't see the output (logs) on the screen.
- If you want to see them, you use the docker-compose logs @option @service command.
- If you run docker-compose logs it will show you all the logs for all services, but it won't show you new ones.
- If you want to follow the logs in real time you use the -f option: docker-compose logs -f.
- If you have multiple services, and you want to see logs for only a single service, you can use the docker-compose logs @options @serviceName syntax.
Multiple Services
Defining Multiple Services
- Docker compose shines when you want to run a multi-container app.
- Even if you want to run multiple containers, you can run your app using a single command.
- Let's add a mysql service in our compose file.
We will define a mysql service, but our app won't try to interact with the database at the moment.
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
myDb: # 1
image: mysql:5.7 # 2
environment: # 3
- MYSQL_ROOT_PASSWORD=password # 4
- MYSQL_DATABASE=docker_kubernetes # 4
#1 name of the new service.
#2 image you want to run. You can use build if you want to.
#3 Passing environment variables to the container, like we did with the -e option.
#4 List of environment variables.
- One new thing in this service is that we are setting environment variables in the file.
- Remember when you had to do this on the command line? Those are the old days now :).
Listing the running services
- If you want to list the running services you can use docker-compose ps.
Attach mysql terminal
docker-compose exec @servicename @cmd
- If you want to attach the mysql pseudo tty to the host, we can do that too.
- docker-compose exec attaches the pseudo tty by default, so we don't need to add the -it option.
- If you don't want to attach the pseudo-tty you can use the -T option.
- For now let's use docker-compose exec myDb sh.
- Play around with the service, but remember data won't be persisted in the next run.
Volumes
- We learned how we can attach volumes using the docker run command.
- There was one problem: we had to use a very long command.
- Using a big command is not a problem if we use it only once or twice.
- But we want to be able to run containers multiple times.
- You already know docker-compose tries to simplify the process of running containers.
- Compose supports volumes as well.
- We simply use the volumes property for a service.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
myDb:
image: mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=docker_kubernetes
volumes: # 1
- "./volume/mysql-compose/:/var/lib/mysql/" # 2
#1 we use the volumes property to map the volumes.
#2 In this case we are mapping a host path to /var/lib/mysql.
- Play around with the service.
- Add a table, insert some data, then dispose of all services and start up again.
- Check if all data is still there.
Different ways to map volumes
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# Specify an absolute path mapping
- /opt/data:/var/lib/mysql
# Path on the host, relative to the Compose file
- ./cache:/tmp/cache
# Named volume
- datavolume:/var/lib/mysql
Named volumes
- If you want to use a named volume in docker-compose, you don't need to create it yourself.
- There is a volumes root property where you can define your volumes.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
myDb:
image: mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=docker_kubernetes
volumes:
- "./volume/mysql-compose/:/var/lib/mysql/"
- "namedVolume:/test/path" # 3
volumes: # 1
namedVolume: # 2
#1 root volumes property, where we can define any number of volumes.
#2 name of the volume. You can add some properties inside namedVolume too.
Network
- Networks are also an easy concept; we are not going to learn much new here.
- We will just learn how to do the same stuff, but more efficiently.
- At the end you will create an app that talks with a database.
- And most of all, the database values will be persisted.
Setting Up Network
- If you were paying attention to the logs when we ran our services, you saw that docker-compose creates a default network for us.
- So all services defined in the same file are already in the same (default) network.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
myTester: # 1
image: amantuladhar/network-tools # 2
command: sleep 50000 # 3
myDb:
image: mysql:5.7
...
...
#1 Define one more service, myTester.
#2 Use the image amantuladhar/network-tools. Remember when we learned networks, we used this to test the connection between two containers.
#3 Override the default command for the image, as this image doesn't have a long-running process. This is so that it runs for a long period of time, and we can go inside the service and test our network.
- I believe this is nothing new, we already did this before.
- The only difference is that now we are using compose, and its convenience.
Testing Network
- Attach the pseudo-tty of myTester.
docker-compose exec myTester sh
- Test the network like we did before.
- We have to use the service name to communicate with the other container.
- localhost will not work. We will learn how we can achieve this later.
/ # curl myApp:8080/test | jq .
{
"message": "Hello Fellas!!!",
"host.address": "192.168.48.2",
"app.version": "v1-web"
}
Named Network
- While relying on the default network works, you may want to name your network yourself.
- You may even want to create multiple networks.
- To achieve that we have the networks root property.
- If we use it as a root property, we can define any number of custom networks we want.
- If used inside a service's properties, it makes that particular service reside in the given network.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
networks: # 3
- myNet # 4
myTester:
image: amantuladhar/network-tools
command: sleep 50000
networks: # 3
- myNet # 4
myDb:
image: mysql:5.7
...
...
networks: # 1
myNet: # 2
...
...
#1 this is a root property. It includes the list of custom networks you want to create.
#2 name of the network you want to create.
#3 this is used inside services. It defines which networks a particular service connects to.
#4 Names of the networks the service will use. You can add multiple networks.
- You can check if two containers can talk with each other like before.
Using network_mode
- With the help of the network_mode property we can define the way the container network is configured.
- Remember the --net=container:@name syntax? network_mode can achieve something like that.
- network_mode supports:
supports- "bridge"
- "host"
- "none"
- "service:@name"
- "container:@id/@name"
- For now, we will use the service:@name mode, which I think will be used more often than the others.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-web
ports:
- 9090:8080
networks:
- myNet
myTester:
image: amantuladhar/network-tools
command: sleep 50000
network_mode: service:myApp # 1
myDb:
image: mysql:5.7
...
...
...
#1 This defines that we want to use the same network that is being used by the service myApp.
- We are using the networks property for myApp, to define the network for the service.
- Notice we didn't even have to define the networks property for myTester.
- When defining a service you cannot use the ports and network_mode properties at the same time.
- Testing using localhost in myTester:
/ # curl localhost:8080/test
{
"message": "Hello Fellas!!!",
"host.address": "192.168.112.2",
"app.version": "v1-web"
}
Web App With Database
- You now have knowledge of how docker-compose and its components work.
- Let's create a web app that connects with a database.
- You can use the amantuladhar/docker-kubernetes:v1-db image to get the app that tries to connect with a database.
- This tag expects three environment variables: DB_URL, DB_USER, DB_PASS.
- Web App With Database Example branch compose-db-initial if you want to build it yourself.
- Here's the compose file we are initially working with:
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-db
ports:
- 9090:8080
myDb:
image: mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=docker_kubernetes
volumes:
- "./volume/mysql-compose/:/var/lib/mysql/"
networks:
myNet:
- The first step is to add the network to both services. (Or you can use the default network.)
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-db
ports:
- 9090:8080
networks: # 1
- myNet # 1
myDb:
image: mysql:5.7
networks: # 1
- myNet # 1
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=docker_kubernetes
volumes:
- "./volume/mysql-compose/:/var/lib/mysql/"
networks:
myNet:
#1 added the network to our services.
- If you are using the image above, it expects three environment variables so that it can successfully connect with the database:
- DB_URL Connection URL
- DB_USER Database User
- DB_PASS Database User Password
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-db
ports:
- 9090:8080
networks:
- myNet
environment: # 1
- DB_URL=jdbc:mysql://myDb:3306/docker_kubernetes?useSSL=false # 2
- DB_USER=root # 2
- DB_PASS=password # 2
myDb:
image: mysql:5.7
networks:
- myNet
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=docker_kubernetes
volumes:
- "./volume/mysql-compose/:/var/lib/mysql/"
networks:
myNet:
#1 Using the environment property to set our environment variables.
#2 Setting the necessary environment variables.
- If you run the services now you should be able to start your app and use it.
Managing service start order
- When using the above compose file, it is not guaranteed that the database starts first and the web app later.
- Docker compose doesn't know which service to start first.
- If our web app starts first and then the database, clearly our app will fail.
- To minimize such issues docker-compose has a depends_on property that we can use.
- We can add the list of services a particular service depends on, and docker-compose will take care of starting them in order.
depends_on does not wait for mysql to be ready before starting the web app; it only waits until the dependency has been started.
version: '3'
services:
myApp:
image: amantuladhar/docker-kubernetes:v1-db
...
...
depends_on: <-------------
- myDb
myDb:
image: mysql:5.7
networks:
- myNet
...
...
networks:
myNet:
Restart Policy
- The restart property accepts the same values we learned before:
- on-failure
- unless-stopped
- always
- "no"
version: '3'
services:
myApp:
...
restart: on-failure
...
Health Check
healthcheck
- We also have a healthcheck property if we want to override the health check of the image.
- Like the HEALTHCHECK instruction, we can provide different options to configure our health check:
- interval
- timeout
- retries
- start_period
version: '3'
services:
myApp:
...
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/test"]
interval: 10s
timeout: 5s
retries: 3
start_period: 20s
Maven Docker Plugin
Docker with Maven Basics
- We learned docker and docker-compose basics.
- We can easily containerize our app, even if it needs database support.
- While creating images manually by yourself can be done, more human interaction leads to more errors.
- Delegating the image build process to maven gives you a lot of flexibility and also saves a lot of time.
- There are multiple maven plugins for docker. We will be working with
- fabric8io/docker-maven-plugin
- I like this one as this has lots of options and is highly configurable.
- So much that you don’t even need a Dockerfile to build your own image when using this plugin.
For this chapter we will use the Spring Simple Web App - dockerfile-initial-complete branch.
New Maven Goals
- If we use fabric8io/docker-maven-plugin, we get a few new maven goals.
- Some important ones that we will cover:
- docker:build: This will build the docker image from the configuration we add to pom.xml.
- docker:start/stop: Run/stop the container built from the configuration.
- docker:remove: This is for cleaning up the images and containers.
- docker:logs: This prints out the output of the running containers.
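- For example, a typical invocation chains the package phase with the plugin goals listed above:
./mvnw clean package docker:build docker:start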
Using Plugin
- To use the plugin in your app, you need to add fabric8io/docker-maven-plugin to your pom.xml.
<build>
<plugins>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.28.0</version>
</plugin>
</plugins>
</build>
- This plugin supports lots of configuration, and we put the configuration inside the <configuration> tag.
<build>
<plugins>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.28.0</version>
<configuration>
<verbose>true</verbose>
<images>
<image>
</image>
</images>
</configuration>
</plugin>
</plugins>
</build>
- The important part in the plugin definition is the <configuration> element.
- It has an <images> element which allows us to add multiple <image> entries.
- There are two main elements in an <image>:
- A <build> configuration specifying how images are built.
- A <run> configuration describing how containers should be created and started. Think of this as the options that you pass to the docker run command.
- If you are using the app from GitHub you should have a Dockerfile:
FROM openjdk:8-jre-alpine
WORKDIR /myApp
EXPOSE 8080
RUN apk add --no-cache curl
COPY ["target/*.jar", "./app.jar"]
CMD ["java", "-jar", "app.jar"]
Using Dockerfile
- Let's add some configuration using the <build> tag.
<image>
<name>docker-kubernetes:${project.version}</name>
<build>
<dockerFileDir>${project.basedir}</dockerFileDir>
<dockerFile>Dockerfile</dockerFile>
</build>
</image>
- To build the image you just need to run ./mvnw clean package docker:build.
- The default value for <dockerFileDir> is /src/main/docker.
- We have our Dockerfile inside the project root directory, so we are overriding the value.
Run using docker:start
- It is as simple as ./mvnw package docker:start.
...
[INFO] --- docker-maven-plugin:0.28.0:start (default-cli) @ cloudnative ---
[INFO] DOCKER> [docker-kubernetes:v1-web]: Start container b9cbcab4c5a6
...
- If you list the running containers you can see that one container is running.
- Of course, we cannot access the app; we haven't published the port yet.
Stop using docker:stop
- To stop the running container you can use ./mvnw package docker:stop.
...
[INFO] --- docker-maven-plugin:0.28.0:stop (default-cli) @ cloudnative ---
[INFO] DOCKER> [docker-kubernetes:v1-web]: Stop and removed container b9cbcab4c5a6 after 0 ms
...
Port Publishing
- We can publish the port easily as well.
- For this plugin we add the configuration inside the <run> tag.
- Inside the <run> tag we use the <ports> tag to publish ports.
- Every port mapping is put inside a <port> tag.
<image>
<name>docker-kubernetes:${project.version}</name>
<build>
<dockerFileDir>${project.basedir}</dockerFileDir>
<dockerFile>Dockerfile</dockerFile>
</build>
<run>
<ports>
<port>9090:8080</port>
</ports>
</run>
</image>
- Build and run your app again and you will be able to access it from the host this time.
Logs
- If you want to see the logs just use the docker:logs goal.
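- For example, after the container has been started:
./mvnw docker:logs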
Removing images
./mvnw clean package docker:remove
- This goal can be used to clean up images and containers.
- By default, all images with a build configuration are removed.
Build image without Dockerfile
- The plugin we are using supports creating the Dockerfile in memory.
- Inside the <build> tag, we can write Dockerfile-like instructions.
<image>
<name>docker-kubernetes:${project.version}</name>
<build>
<from>openjdk:8-jre-alpine</from>
<assembly>
<descriptorRef>artifact</descriptorRef>
</assembly>
<cmd>java -jar maven/${project.name}-${project.version}.jar</cmd>
</build>
<run>
<ports>
<port>9090:8080</port>
</ports>
</run>
</image>
- <from> is the same as the FROM instruction.
- <assembly> defines how build artifacts and other files enter the Docker image. Using this we don't need COPY to copy our build artifact.
- Inside <assembly> we are using <descriptorRef>, which has some predefined ways to copy files to the image:
- artifact-with-dependencies: Attaches the project's artifact and all its dependencies.
- artifact: Attaches only the project's artifact but no dependencies.
- project: Attaches the whole Maven project but without the target/ directory.
- rootWar: Copies the artifact as ROOT.war to the exposed directory, i.e. Tomcat will then deploy the war under the root context.
- We are using the artifact pre-defined descriptor in our configuration, because we want to copy our jar file only.
- <cmd> is the same as the CMD instruction.
- Build and run your app again. You will be able to access your app from the host.
Health Check
Run Command
- How to add a RUN instruction when using the in-memory Dockerfile.
<build>
...
<runCmds>
<run>apk add --no-cache curl</run>
</runCmds>
<cmd>java -jar maven/${project.name}-${project.version}.jar</cmd>
</build>
- <runCmds> can have multiple <run> tags.
- Here we are installing curl.
Restart Policy
- If you want to set restart policies for your containers, you can use the <restartPolicy> tag.
<run>
<restartPolicy>
<name>on-failure</name>
<retry>3</retry>
</restartPolicy>
...
</run>
- <restartPolicy> is added inside the <run> tag.
- It has <name> and <retry> child tags.
- <name> takes the type of policy you want to enforce.
- <retry> is how many times you want to restart your container when the on-failure policy is used.
Health check
- We can add a health check in our image as well.
- We use the <healthCheck> tag, which has a few child tags.
<build>
...
<healthCheck>
<interval>10s</interval>
<timeout>5s</timeout>
<startPeriod>10s</startPeriod>
<retries>3</retries>
<cmd>curl --fail http://localhost:8080/test/status-5xx || exit 1</cmd>
</healthCheck>
</build>
- We use the <healthCheck> tag inside the <build> tag.
- It has a few child tags, which represent the options health check supports.
- /test/status-5xx will throw an Internal Server Error after the 6th request.
- So for the time being the container will be healthy, and after a couple of minutes it will be flagged as unhealthy.
Multiple Containers
Maven Plugin Running Multiple Containers
- With fabric8/docker-maven-plugin we can run multiple containers.
- Let's run mysql, but we won't run the app that talks with this database (for now).
<images>
<image>
<name>docker-kubernetes:${project.version}</name>
...
</image>
<image>
<name>mydb:${project.version}</name>
<alias>mydb</alias>
<build>
<from>mysql:5.7</from>
</build>
<run>
<env>
<MYSQL_ROOT_PASSWORD>password</MYSQL_ROOT_PASSWORD>
<MYSQL_DATABASE>docker_kubernetes</MYSQL_DATABASE>
</env>
</run>
</image>
</images>
- You just need to add another <image> tag inside the <images> tag.
- Here I am using the mysql:5.7 image.
- Remember mysql expects that you pass some environment variables.
- With the <env> tag you can set the environment variables.
- If you docker:start now, you will run both containers.
Volume
Maven Plugin Volumes
- To mount a volume we have the <volumes> tag.
- The <volumes> tag has the children tags <from> and <bind>.
  - <from>: List of <image> elements which specify image names or aliases of containers whose volumes should be imported.
  - <bind>: List of <volume> elements that map a host folder to a container path.
<images>
<image>
<name>docker-kubernetes:${project.version}</name>
...
</image>
<image>
<name>mydb:${project.version}</name>
...
<run>
<volumes>
<bind>
<volume>${project.basedir}/volume/mysql-plugin/:/var/lib/mysql/</volume>
</bind>
</volumes>
...
</run>
</image>
</images>
- I am saving mysql data in ${project.basedir}/volume/mysql-plugin/.
- Start the containers, attach a pseudo-tty to the mydb container, and then insert some data into the database.
- Stop and run the containers again to see if the data is saved.
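- A minimal sketch of how you could check this, assuming the running database container ends up being named mydb (use docker ps to confirm the actual container name on your machine):
docker ps
docker exec -it mydb mysql -uroot -ppassword docker_kubernetes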
Network
Maven Plugin Networks
For this chapter we will use the Simple App With DB - from the plugin-db-initial branch.
Auto Create Network
- When dealing with networks, we may need to create a custom network.
- Normally a custom network has to be created manually.
- The plugin that we are using can be configured so that custom networks are created automatically.
- Just set the <docker.autoCreateCustomNetworks> property to true and you will be all set.
<properties>
...
<docker.autoCreateCustomNetworks>true</docker.autoCreateCustomNetworks>
</properties>
Create a network
- To create a network we use the <network> tag.
- <network> has a few children:
  - mode: The network mode, which can be one of the values listed below.
  - name: For mode container this is the container name (i.e. the image alias). For mode custom this is the name of the custom network.
  - alias: One or more alias elements can be provided, which give a way for a container to be discovered by alternate names by any other container within the scope of a particular network. This configuration only has an effect when the network mode is custom. More than one alias can be given by providing multiple entries.
<image>
<name>docker-kubernetes:${project.version}</name>
...
<run>
<network>
<mode>custom</mode>
<name>mynet</name>
</network>
...
<env>
<DB_URL>
<![CDATA[ jdbc:mysql://mysql_db:3306/docker_kubernetes?useSSL=false]]>
</DB_URL>
<DB_USER>root</DB_USER>
<DB_PASS>password</DB_PASS>
</env>
</run>
</image>
<image>
<name>mydb:${project.version}</name>
<alias>mydb</alias>
...
<run>
<network>
<mode>custom</mode>
<name>mynet</name>
<alias>mysql_db</alias>
</network>
...
<ports>
<port>4406:3306</port>
</ports>
<env>
<MYSQL_ROOT_PASSWORD>password</MYSQL_ROOT_PASSWORD>
<MYSQL_DATABASE>docker_kubernetes</MYSQL_DATABASE>
</env>
</run>
</image>
- To add your container to a network, you use the <network> tag inside the <run> tag.
- <mode> can accept multiple values:
  - bridge: Bridged mode with the default Docker bridge (default).
  - host: Share the Docker host network interfaces.
  - container: Connect to the network of the specified container. The name of the container is taken from the <name> element.
  - custom: Use a custom network, which must be created beforehand using docker network create.
  - none: No network will be set up.
- In our case we are using custom, because we want to create our own network.
- <alias> is where we give a way for a container to be discovered by alternate names by any other container within the scope of a particular network.
- Notice: we are using the same network in both image configurations, but on the mysql side we are giving it an alias.
- If you build and run the app, your app will just work.
mode container
- Using mode container you can access your networked container using localhost.
- But remember you have one big limitation: you cannot combine this with <ports>.
<images>
<image>
<name>docker-kubernetes:${project.version}</name>
<alias>my-app</alias>
...
<run>
<network>
<mode>custom</mode>
<name>mynet</name>
</network>
<ports>
<port>9090:8080</port>
</ports>
<env>
<DB_URL> <![CDATA[ jdbc:mysql://localhost:3306/docker_kubernetes?useSSL=false ]]> </DB_URL>
...
</env>
</run>
</image>
<image>
<name>mydb:${project.version}</name>
<alias>mydb</alias>
<build>
<from>mysql:5.7</from>
</build>
<run>
<network>
<mode>container</mode>
<name>my-app</name>
</network>
...
</run>
</image>
</images>
- We are still using <network> with mode custom on our web app side.
- On the database side, we are using mode container, but instead of a network name we are using our web app name.
- We are using the container alias here, but you could have used the name as well.
dependsOn
- If you want to control which container starts first you can use depends_on.
- With the plugin we have the <dependsOn> tag.
<image>
<name>docker-kubernetes:${project.version}</name>
...
<run>
...
<dependsOn>
<container>mydb</container>
</dependsOn>
</run>
</image>
<image>
<name>mydb:${project.version}</name>
<alias>mydb</alias>
...
</image>
Understanding Kubernetes
Why do we need Kubernetes?
- We learned docker and docker-compose. We know how to containerize our web app. Why do we need to learn yet another tool?
- Note that docker and docker-compose run on a single host.
- With the help of Kubernetes we can run our app in a multi-host environment.
- With Kubernetes, we can easily deploy and run our software components without having to know about the actual servers underneath.
- It doesn't matter whether the cluster has a single host or multiple hosts.
- When deploying a multi-component application through Kubernetes, it selects a server for each component and deploys it.
- It also makes it easy to find and communicate with all the other components of your application.
- Kubernetes enables you to run your applications on thousands of computer nodes as if all those nodes were a single, enormous computer.
- It abstracts away the underlying infrastructure and, by doing so, simplifies development, deployment.
- Kubernetes can be thought of as an operating system for the cluster.
Kubernetes Architecture As Simple As Possible
- Kubernetes' architecture in its simplest form has two main components:
- Kubernetes Master
- Kubernetes Worker
Cloud
+----------------------+
| +----------------+ |
User+-----------------> | Kubernetes | |
| | Master | |
| +-------+--------+ |
| | |
| | |
| +-------v--------+ |
| | Kubernetes | |
| | Worker | |
| +----------------+ |
+----------------------+
Kubernetes Master
- As a user we always communicate with the Kubernetes Master. We will use the kubectl tool to interact with the Kubernetes Master.
- The Kubernetes Master (Control Plane) has these subcomponents:
  - Kubernetes API Server, which you and the other Control Plane components communicate with.
  - Scheduler, which schedules your apps (assigns a worker node to each deployable component of your application).
  - Controller Manager, which performs cluster-level functions, such as replicating components, keeping track of worker nodes, handling node failures, and so on.
  - etcd, a reliable distributed data store that persistently stores the cluster configuration.
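- A quick way to see which control plane (master) your kubectl is currently talking to:
kubectl cluster-info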
+-------------------------------------+
| +--------------+ |
| | Scheduler | |
| +--------------+ |
| |
| +-----------+ +--------------+ |
| | API | | Controller | |
| | Server | | Manager | |
| +-----------+ +--------------+ |
| |
| +--------------+ |
| | etcd | |
| +--------------+ |
+-------------------------------------+
Kubernetes Worker
- Kubernetes Workers run your containerized applications.
- The task of running, monitoring, and providing services to your applications is done by the Workers.
- A Kubernetes Worker has different components:
  - Container Runtime: Docker, rkt, or another container runtime, which runs your containers.
  - Kubelet, which talks to the API server and manages containers on its node.
  - Kubernetes Service Proxy (kube-proxy), which load-balances network traffic between application components.
+------------------------------------------------+
| +------------+ +-----------------+ |
| | kublet | | kube-proxy | |
| +------------+ +-----------------+ |
| |
| +-----------------------+ |
| | Container Runtime | |
| +-----------------------+ |
+------------------------------------------------+
Nodes
- Within a cloud we can have lots of nodes, which may or may not be on the same machine.
- It is the job of Kubernetes to abstract away these details.
- We always communicate with the Master, which may be on the same node as a Worker or may be on a different one.
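- You can list the nodes of your cluster (on minikube you will typically see a single node) with:
kubectl get nodes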
Cloud
+------------------------------------+
| +----------+ |
| | Worker | |
| | Node | |
| +----------+ |
| |
| +-----------+ +----------+ |
| | Master | | Worker | |
| | Node | | Node | |
| +-----------+ +----------+ |
| |
| +----------+ |
| | Worker | |
| | Node | |
| +----------+ |
+------------------------------------+
Pods
- Pods are the central, most important, concept in Kubernetes.
- Pod represents the basic building block in Kubernetes and a place where containers are run.
- When using Kubernetes you always deploy and operate on a pod of containers instead of deploying containers individually.
- Pods can run more than one container, but it is good practice to run one container per pod.
- We need to remember that Kubernetes manages Pods not containers.
- So Kubernetes cannot scale individual containers, instead, it scales whole pods.
Points when running multiple container
- A pod with multiple containers always runs on a single worker node; it never spans multiple worker nodes.
- A pod of containers allows you to run closely related processes together and provide them with (almost) the same environment as if they were all running in a single container, while keeping them somewhat isolated.
- One thing to stress here is that because containers in a pod run in the same Network namespace, they share the same IP address and port space.
- This means processes running in containers of the same pod need to take care not to bind to the same port numbers or they’ll run into port conflicts.
Pods Configuration
- I am using the kub-pod-initial branch for this chapter.
- We can create a Pod using the command line, but let's do it using a yaml file.
apiVersion: v1 # 1
kind: Pod # 2
metadata: # 3
name: v1-web # 4
spec: # 5
containers: # 6
- image: amantuladhar/docker-kubernetes:v1-web # 7
name: v1-web # 8
ports: # 9
- containerPort: 8080
- #1 apiVersion defines which version of the Kubernetes API to use.
- #2 kind describes what kind of Kubernetes resource we want to define.
- #3 metadata is the place where you put information about the pod.
- #4 name sets the name for the pod.
- #5 spec contains the pod specification, such as the container template, volumes, etc.
- #6 containers is the list of containers you want the Pod to have.
- #7 image defines which image to use.
- #8 name is the name of the container.
- #9 ports is the list of ports the container listens to.
- Specifying ports in the pod definition is purely informational.
- Omitting them has no effect on whether clients can connect to the pod through the port.
- But it makes sense to define the ports explicitly so that everyone using your cluster can quickly see what ports each pod exposes.
Creating pods
- To create a pod we have a simple command.
kubectl create -f pods.yaml
kubectl create -f pods.yaml
pod/v1-web created
List Pods
- To list the running pods, we can use the kubectl get pods command.
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
v1-web 1/1 Running 0 21m
View logs
- Viewing logs is simple too.
kubectl logs @pod_name
kubectl logs v1-web
- If your pod includes multiple containers, you have to explicitly specify the container name by including the -c <container name> option.
Access your app
- We ran our Pod, if you check the status it states it is running as well.
- But how do we access our app?
- Note: Pods are not meant to be accessed directly. We create higher-level resources that abstract these details.
- For debugging purposes, Kubernetes lets us access our Pods directly using the port-forwarding technique.
kubectl port-forward @pod_name @host_port:@container_port
➜ kubectl port-forward v1-web 9090:8080
Forwarding from 127.0.0.1:9090 -> 8080
Forwarding from [::1]:9090 -> 8080
- You need to keep this command running if you want to access your pod.
- If you visit localhost:9090/test, you will be able to access your app.
➜ http localhost:9090/test
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Sun, 17 Feb 2019 20:41:22 GMT
Transfer-Encoding: chunked
{
"app.version": "v1-web",
"host.address": "172.17.0.5",
"message": "Hello Fellas!!!"
}
Deleting the pods
- Deleting the pods is simple too.
kubectl delete pods @pod_name
➜ kubectl delete pods v1-web
pod "v1-web" deleted
Pod Labels
- Labels are simply a tag given to pods which may help us identify the specific pods.
- As our application grows, the number of pods increases; we need to categorize them so that we can manage them efficiently.
- Labels are a simple, yet incredibly powerful feature in Kubernetes.
- Labels can be added to pods, but also for all other resources Kubernetes has.
- A label is an arbitrary key-value pair you attach to a resource.
- It can be utilized when selecting resources using label selectors.
- Resources are filtered based on whether they include the label specified in the selector.
- A resource can have more than one label, as long as the keys of those labels are unique within that resource.
- You usually attach labels to resources when you create them, but you can also add additional labels or even modify the values of existing labels later without having to recreate the resource.
- Let's see how we can add them in yaml.
- Labels are added in the metadata section using the labels property.
apiVersion: v1
kind: Pod
metadata:
name: v1-web
labels:
version: v1
env: dev
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: v1-web
ports:
- containerPort: 8080
- You can create the resource using kubectl create -f pods.yaml.
See Pods labels
- You can see pod labels in multiple ways.
--show-labels
- If you want to see all the labels, use the --show-labels option when using kubectl get pods.
kubectl get pods --show-labels
➜ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
v1-web 1/1 Running 0 2m38s env=dev,version=v1
Using -L option
- While the --show-labels option works fine, what if we have lots of labels? Not that I am saying you should add many labels.
- We can use the -L option with the list of labels you want to see: -L label_key1,label_key2
➜ kubectl get pods -L version,env
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web 1/1 Running 0 6m2s v1 dev
Adding new label
- If you want to change/add labels for pods, you can do it by stopping the pod and creating it again.
- But let's see how we can add new labels to a running pod.
- We use kubectl label @resource @resource_name @label_key=@label_value
- For us @resource is pods.
➜ kubectl label pods v1-web value=new
pod/v1-web labeled
- List the pods with labels and see the new value.
➜ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
v1-web 1/1 Running 0 14m env=prod,value=new,version=v1
Updating labels
- If we want to override an existing label, we can use the same command as above, but we need to append the --overwrite option as well.
- We use kubectl label @resource @resource_name @label_key=@label_value --overwrite
- Let's change env to dev again.
➜ kubectl label pods v1-web env=dev --overwrite
pod/v1-web labeled
- List the labels
➜ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
v1-web 1/1 Running 0 16m env=dev,value=new,version=v1
List the pods using labels
- Listing pods using a label selector.
- We use the -l option to add label criteria when listing the pods.
Selecting pods with specific label and specific value
➜ kubectl get pods -l env=dev
NAME READY STATUS RESTARTS AGE
v1-web 1/1 Running 0 51m
Selecting pods that have specific label
- We can also select pods that have a specific label, ignoring the value it has.
➜ kubectl get pods -l env
NAME READY STATUS RESTARTS AGE
v1-web 1/1 Running 0 53m
Selecting pods that don't have specific label
➜ kubectl get pods -l '!env'
No resources found.
Deleting pods using label selectors
- We already know how to delete a specific pod by name.
- If we want to mass delete pods, we can use the label selector criteria with the -l option.
➜ kubectl delete pods -l env
pod "v1-web" deleted
Deleting all pods at once
- We can delete all pods at once using kubectl delete pods --all
Deleting all resources
- We haven't learned about other types of resources yet, but if we want to delete all resources at once we can use kubectl delete all --all
Updating Kubernetes Resource
Using kubectl edit
You can also use kubectl edit rs @rs_name and edit the yaml template in your editor.
kubectl edit rs v1-web
Using kubectl patch
- We can use kubectl patch to update the template of a running resource.
- You can pass either JSON or YAML when patching the resource.
kubectl patch @resource_type @resource_name --patch @template_subset
k patch pods v1-web --patch '{"metadata": {"labels": {"version": "v2"}}}'
- In the above command --patch takes a subset of a Kubernetes configuration.
- We can pass a yaml-like structure to --patch, but I don't think it will look good on a command line. Plus the spacing and tabs will be tricky.
- We are using JSON to send the patch.
- We define the properties we want to override. Notice that it is a subset of a full Kubernetes config.
- When patching like this, we need to structure our data from the root just like we do in the config file.
Using kubectl apply
- We can use kubectl apply to update the resource template of a running resource as well.
- Change your label env to prod in the yaml file for now.
➜ kubectl apply -f pods.yaml
pod/v1-web configured
- List the labels again and you will see labels are changed.
➜ kubectl get pods -L version,env
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web 1/1 Running 0 8m51s v1 prod
Liveness Probe
- We learned how to add health check and restart policies in docker.
- I also mentioned that Kubernetes, as of now, doesn't support these health checks.
- It cannot read the status set by docker and take action upon it.
- For this reason, Kubernetes has the concept of probes, which is a simple but very powerful concept.
- We will look into liveness probes for now.
- Kubernetes can check if a container is still alive through liveness probes.
- You can specify a liveness probe for each container in the pod's specification.
- Kubernetes will periodically execute the probe and restart the container if the probe fails.
Types of liveness probe
- We can configure a liveness probe for a container using one of three methods.
- HTTP GET probe
  - This performs an HTTP GET request on the container's IP address, on a port and path you specify.
  - If the probe receives a response code of 2xx or 3xx, the probe is considered successful.
  - If the server returns an error response code or if it doesn't respond at all, the probe is considered a failure.
  - A container whose liveness probe failed will be restarted by Kubernetes.
- TCP Socket probe
  - This configuration tries to open a TCP connection to the specified port of the container.
  - If the connection is established successfully, the probe is successful.
  - Otherwise, the container is restarted.
- Exec probe
  - This executes an arbitrary command inside the container and checks the command's exit status code.
  - If the status code is 0, the probe is successful.
  - All other codes are considered failures.
Adding Liveness Probe
- Let's see how we can add a liveness probe to our app.
- We will use the httpGet probe in our case.
apiVersion: v1
kind: Pod
metadata:
name: v1-web
labels:
version: v1
env: prod
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: v1-web
livenessProbe:
httpGet:
path: /test
port: 8080
ports:
- containerPort: 8080
Liveness Probe in Action
- Let's create a resource using above configuration.
- If you list the pods you will see our pod in running status.
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web 1/1 Running 0 32s v1 prod
- If you call localhost:9090/test/exit-1 it will exit our app.
- When Kubernetes runs the liveness probe and the app doesn't respond, it will tag the pod status as Error.
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web 0/1 Error 0 21s v1 prod
- Then it will restart your pod.
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web 1/1 Running 1 101s v1 prod
- Notice the RESTARTS column. The value indicates how many times your app has been restarted.
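- If you want to see why the container was restarted, the pod events are a good place to look; for the pod used above that would be:
kubectl describe pod v1-web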
Liveness Probe Options
- initialDelaySeconds: Number of seconds after the container has started before liveness probes are initiated.
- periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
- timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
- failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of a liveness probe means restarting the Pod. Defaults to 3. Minimum value is 1.
- successThreshold: (Mostly used in readiness probes) we will discuss this later.
livenessProbe:
httpGet:
path: /test/
port: 8080
initialDelaySeconds: 10
periodSeconds: 60
timeoutSeconds: 5
failureThreshold: 3
ReplicaSet
- We now know how to create pods manually.
- But in the real world we never want to create a pod manually.
- We always use higher-level constructs to manage pods.
- One of the higher-level constructs that we use to manage pods is the ReplicaSet. (Although we will use one more higher-level construct that manages ReplicaSets later.)
- A ReplicaSet ensures the pods it manages are always kept running.
- If a pod disappears for any reason, the ReplicaSet notices the missing pod and creates a replacement pod.
How does ReplicaSet work?
- ReplicaSet constantly monitors the list of running pods.
- A label selector is used to filter the pods that the ReplicaSet manages.
- ReplicaSet makes sure the actual number of pods it manages always matches the desired number.
- If too few such pods are running, it creates new replicas from a pod template.
- If too many such pods are running, it removes the excess replicas.
Defining ReplicaSet yaml file
apiVersion: apps/v1 # 1
kind: ReplicaSet # 2
metadata: # 3
name: v1-web # 4
labels: # 5
version: v1 # 5
env: prod # 5
spec: # 6
replicas: 3 # 6
selector: # 8
matchLabels: # 9
version: v1 # 10
env: prod # 10
template:
# After this it just a Pod template we had before
metadata:
name: v1-web
labels: # 11
version: v1 # 12
env: prod # 12
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: v1-web
livenessProbe:
httpGet:
path: /test/
port: 8080
ports:
- containerPort: 8080
- #1: ReplicaSet is defined under apiVersion apps/v1.
- #2: We are setting our resource type to ReplicaSet.
- #3: Defines the metadata information for our resource.
- #4: We are setting the name for our ReplicaSet.
- #5: labels holds the labels added to the ReplicaSet itself.
- #6: spec contains the specification for our resource.
- #7: replicas tells the ReplicaSet how many copies of the pod we want. The ReplicaSet makes sure that the specified number of pods is always running.
- #8: selector holds the list of labels that the ReplicaSet will target. These are the labels that determine which pods are managed by the ReplicaSet. This selector must match the labels defined in the pod template (#11, #12).
- #9: matchLabels indicates that we are matching a label key with a label value.
  - We also have matchExpressions which is more expressive. (More on this later.)
- template is the template for the pod our ReplicaSet manages. When starting the ReplicaSet or adding new Pods, it will use this template to start the Pod.
  - Notice: all the properties below template are the same as the Pod yaml file we defined before.
Create ReplicaSet
- To create a ReplicaSet we simply use kubectl create -f replicaset.yaml.
➜ kubectl create -f replicaset.yaml
replicaset.apps/v1-web created
List ReplicaSet
- To list the ReplicaSets we use the same command as before, but with a different resource: kubectl get replicaset or kubectl get rs.
- rs is the short form for replicaset.
➜ kubectl get replicaset
NAME DESIRED CURRENT READY AGE
v1-web 3 3 3 42s
➜ kubectl get rs
NAME DESIRED CURRENT READY AGE
v1-web 3 3 3 44s
List the pods
- If you list the running pods, you will see you have three pods running.
➜ kubectl get po
NAME READY STATUS RESTARTS AGE
v1-web-8s5zw 1/1 Running 0 3m36s
v1-web-r9pkw 1/1 Running 0 3m36s
v1-web-xk9sk 1/1 Running 0 3m36s
- po is the short form for pods.
Scaling our App
Scale Up
- If we ever want to scale up our application, it will be very easy now.
- There are many ways you can scale your pods:
  - kubectl patch
  - kubectl apply
  - kubectl edit
  - kubectl scale
- We already know how to use the first three options from the previous section.
- Let's use the fourth one, kubectl scale.
kubectl scale rs --replicas=@number @rs_name
- Using this command you can quickly change the number of replicas you want to run.
➜ kubectl scale rs --replicas=10 v1-web
- List the pod again, you will see number of running pods will increase.
➜ kubectl get po
NAME READY STATUS RESTARTS AGE
v1-web-8s5zw 1/1 Running 0 6m48s
v1-web-fdnwf 1/1 Running 0 8s
v1-web-l6dbh 1/1 Running 0 8s
v1-web-lxkp7 1/1 Running 0 8s
v1-web-n6rfh 1/1 Running 0 8s
v1-web-q9zdq 1/1 Running 0 8s
v1-web-r9pkw 1/1 Running 0 6m48s
v1-web-s455s 1/1 Running 0 8s
v1-web-xctpn 1/1 Running 0 8s
v1-web-xk9sk 1/1 Running 0 6m48s
Scale Down
- Scaling down is super easy as well.
- Just update the replicas value. Set it to 3 for now.
➜ kubectl scale rs --replicas=3 v1-web
ReplicaSet matchExpression
- We saw how matchLabels works in the previous example.
- But a ReplicaSet can have a more expressive label selector than just matching key and value pairs.
selector:
matchExpressions:
- key: @label_key
operator: @operator_value
values:
- @label_value
- @label_value
- When we use matchExpressions we have a few options that we can tweak.
- We can have multiple matchExpressions as well.
- key takes a label key.
- operator takes one of the pre-defined operators:
  - In: Label's value must match one of the specified values.
  - NotIn: Label's value must not match any of the specified values.
  - Exists: Pod must include a label with the specified key. When using this operator, you shouldn't specify the values field.
  - DoesNotExist: Pod must not include a label with the specified key. The values property must not be specified.
- values: If we use In or NotIn, the list of values to look for.
- Let's generate a scenario we can work on.
- First run the ReplicaSet yaml file you had before.
- You will end up with three Pods. Override one Pod's label env=prod to env=dev.
- The Pod whose label was changed is now no longer managed by our ReplicaSet.
- Now the ReplicaSet will create a new Pod to match the desired state.
- If you list the running Pods you will see we have 4 pods. (3 managed by the ReplicaSet, 1 not)
➜ kubectl get pods -L version,env
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web-8s5zw 1/1 Running 0 31m v1 dev
v1-web-njbt9 1/1 Running 0 5s v1 prod
v1-web-r9pkw 1/1 Running 0 31m v1 prod
v1-web-xk9sk 1/1 Running 0 31m v1 prod
- How do we tell the ReplicaSet to manage Pods with label env=dev as well as env=prod?
- How do we avoid creating the extra Pod?
Using Operator
- Let's update our ReplicaSet label selector to:
selector:
matchExpressions:
- key: env
operator: In
values:
- dev
- prod
- Using In we can define that the ReplicaSet can look for either value dev or prod for the label env.
- If we update one of the pod labels env to dev now, it won't create a new Pod, as the updated Pod is still managed by the ReplicaSet and the replica count matches the desired state.
➜ kubectl label po v1-web-6bcgx env=dev --overwrite
pod/v1-web-6bcgx labeled
- Now list the pods.
NAME READY STATUS RESTARTS AGE VERSION ENV
v1-web-6bcgx 1/1 Running 0 6m22s v1 dev
v1-web-mkk79 1/1 Running 0 6m22s v1 prod
v1-web-sxl59 1/1 Running 0 6m22s v1 prod
Service
- Now we know how to manage Pods using a ReplicaSet.
- You know that a ReplicaSet makes sure that the desired number of Pods is always alive.
- One very important thing we should remember is that Pods are disposable; they should be easily replaceable.
- Another thing we need to remember is that there can be multiple Pods serving the same content.
- How do we keep track of Pod IPs? Pods can scale up / down; how do we know the new IPs of pods when they are added?
- If Pods are unhealthy, they are replaced. They may or may not get the same IP.
What is Service
- Service is a resource that makes a single, constant point of entry to a group of pods providing the same service.
- Each service is assigned an IP and port that never change while the service exists.
- Using the service IP we can communicate with the Pods that the service manages.
- A Service also uses a label selector to select the Pods it manages.
Service Definition
- To create a Service, you create a resource with kind: Service.
apiVersion: v1
kind: Service # 1
metadata:
name: v1-web-svc # 2
spec:
type: NodePort # 3
ports:
- port: 80 # 4
targetPort: 8080 # 5
nodePort: 30000 # 6
selector: # 7
env: prod
version: v1
- #1: We are defining the type of resource as Service.
- #3: Type of Service. The default is ClusterIP, but if we use ClusterIP it will be visible only inside the cluster. NodePort will expose the set of Pods it manages to external clients.
- #4: Exposes the service on port 80. Type NodePort will ignore this property.
- #5: targetPort is the port where the container serves the app.
- #6: nodePort is the port used to expose the service to external clients. It must be in the range 30000-32767.
- #7: This is the label selector that is used by the Service to select the Pods.
- Note: minikube doesn't support LoadBalancer as of now.
Create a Service
- You can create a service by using kubectl create -f svc.yaml.
+----------------------------------+
+--------+ | +-----------+ |
|External+----------------->| Service | |
| Client | | +-----+-----+ |
+--------+ | | |
| +------------------------+ |
| | | | |
| +-v-+ +-v-+ +-v-+ |
| |Pod| |Pod| |Pod| |
| +---+ +---+ +---+ |
+----------------------------------+
List Service
- To list the services in the cluster you use kubectl get svc, where svc is the short form for service.
Access the Pods externally
- Now that service is running, we can access our app from outside.
- If you try localhost:30000/test it will not work. Why?
- Remember Kubernetes is running inside the minikube cluster, which is running inside a Virtual Machine.
- We can use minikube service @service_name to easily access our services through the browser.
  - minikube ip echoes the minikube IP.
  - minikube service @service_name --url echoes the IP with the port instead of opening the browser.
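- For the Service defined above (named v1-web-svc) that would look something like the lines below; the IP you get back depends on your minikube setup:
minikube service v1-web-svc --url
curl $(minikube service v1-web-svc --url)/test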
Readiness Probe
Why do we need a readiness probe?
- We learned how to use a ReplicaSet, and with that we also learned how to increase the number of running Pods.
- With a Service we learned how to expose them with a single IP. The Service allows us to access our Pods without knowing their actual IPs.
- But when does the Service start to send traffic to a Pod when we scale up the app?
Understanding the problem.
- Create a ReplicaSet that runs only 1 copy of our app for the moment.
- Run this command; it will call your app endpoint every second.
while true ; do curl http://192.168.99.106:30000/test ; echo "\\n" ; sleep 1 ; done;
- Instead of the IP above, use your own IP.
- Now scale up your app. Run 3 replicas.
- You will start to get some failed requests.
{"message":"Hello Fellas!!!","host.address":"172.17.0.5","app.version":"v1-web"}
curl: (7) Failed to connect to 192.168.99.106 port 30000: Connection refused
curl: (7) Failed to connect to 192.168.99.106 port 30000: Connection refused
curl: (7) Failed to connect to 192.168.99.106 port 30000: Connection refused
{"message":"Hello Fellas!!!","host.address":"172.17.0.5","app.version":"v1-web"}
{"message":"Hello Fellas!!!","host.address":"172.17.0.5","app.version":"v1-web"}
curl: (7) Failed to connect to 192.168.99.106 port 30000: Connection refused
curl: (7) Failed to connect to 192.168.99.106 port 30000: Connection refused
curl: (7) Failed to connect to 192.168.99.106 port 30000: Connection refused
{"message":"Hello Fellas!!!","host.address":"172.17.0.7","app.version":"v1-web"}
{"message":"Hello Fellas!!!","host.address":"172.17.0.7","app.version":"v1-web"}
{"message":"Hello Fellas!!!","host.address":"172.17.0.6","app.version":"v1-web"}
{"message":"Hello Fellas!!!","host.address":"172.17.0.6","app.version":"v1-web"}
{"message":"Hello Fellas!!!","host.address":"172.17.0.5","app.version":"v1-web"}
- Why is this happening?
- What happens right now is that the Service starts to send traffic to a Pod as soon as the Pod is started.
- Kubernetes at the moment doesn't have any idea whether our Pod is ready to accept connections.
- To help with this problem we use the readiness probe.
What is a readiness probe?
- We learned about the liveness probe before.
- With the help of the liveness probe we were able to restart Pods that were unhealthy.
- With the help of the readiness probe, Kubernetes will be able to know if Pods are ready to accept connections.
- Like the liveness probe, the readiness probe is also executed periodically.
- The result determines whether the Pod is ready or not.
Types of readiness probe
- Like the liveness probe, we have three ways:
  - Exec
  - HTTP GET
  - TCP Socket
Add readiness probe
- To add a readiness probe, we use the readinessProbe property.
# ReplicaSet Definition
apiVersion: apps/v1
kind: ReplicaSet
metadata:
...
spec:
replicas: 3
...
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: v1-web-con
readinessProbe: # 1
httpGet: # 2
path: /test/ # 3
port: 8080 # 4
initialDelaySeconds: 10 # 5
...
- #1: we use the readinessProbe property to add the probe to our Pod template.
- #2: we are using the httpGet way of defining the probe.
- #3: which path to ping.
- #4: which port to use.
- #5: how many seconds to delay before sending the first probe request.
Readiness probe options
- Like the liveness probe we have a few options we can configure:
  - initialDelaySeconds: Number of seconds after the container has started before readiness probes are initiated.
  - periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
  - timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
  - failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Defaults to 3. Minimum value is 1.
  - successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed.
readinessProbe:
httpGet:
path: /test/
port: 8080
initialDelaySeconds: 10
successThreshold: 1
periodSeconds: 60
timeoutSeconds: 5
failureThreshold: 3
Readiness probe in action
- Create a ReplicaSet that runs only 1 copy of our app for the moment.
- Run this command; it will call your app endpoint every second.
while true ; do curl http://192.168.99.106:30000/test ; echo "\\n" ; sleep 1 ; done;
- Instead of the IP above, use your own IP.
- Now scale up your app. Run 3 replicas.
- You will only see responses from the other Pods once they are ready.
Creating Deployment
- We learned a few types of Kubernetes Resources:
  - Service
  - Pod
  - ReplicaSet
- While Service is a higher-level concept we will keep creating in the future, Pods and ReplicaSets are considered lower-level concepts.
- A Deployment doesn't manage our app directly; rather, the Deployment creates a ReplicaSet which in turn manages the Pods.
- A Deployment is a way to define the state of our application declaratively.
- With the help of a Deployment we can easily deploy our application, plus it is super easy to update it.
- Deployment shines when we are about to upgrade the app to a new version.
- Out of the box it supports rolling updates, but other strategies are also easily achievable. We will discuss deployment strategies later.
+----------+ +----------+
|Deployment+--------->+ReplicaSet|
+----------+ +------+---+
|
|
+------------+-------------+
| | |
+-v-+ +-v-+ +-v-+
|Pod| |Pod| |Pod|
+---+ +---+ +---+
Creating Deployment
- Creating a Deployment is the same as creating a ReplicaSet.
- If you have the ReplicaSet config file from before, you can easily turn it into a Deployment resource.
- Actually we can even delete some properties. We will delete them later, and explain why we can delete them.
- For now let's replace the property kind: ReplicaSet with kind: Deployment.
- And there you have it. You just created your Deployment resource.
apiVersion: apps/v1
kind: Deployment # kind: ReplicaSet
metadata:
name: web
labels:
version: v1
env: prod
spec:
replicas: 3
selector:
matchLabels:
env: prod
version: v1
template:
metadata:
name: web
labels:
version: v1
env: prod
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: web
# Liveness & readiness probe
ports:
- containerPort: 8080
- If you create a resource using kubectl create -f deployment.yaml you will create your resources.
- If you list the resources, you can see the Deployment will create a ReplicaSet and Pods.
- The number of Pods also matches replicas: @number.
Deployment doesn't create Service
- If you listed the resources with kubectl get all you may have noticed we don't have our Service running.
- To access the Pods we need to create a Service.
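- You can reuse the Service definition from the Service chapter for this; assuming you saved it as svc.yaml, creating it is a one-liner:
kubectl create -f svc.yaml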
Rolling out Update
Upgrade the app (V2)
containers:
- image: amantuladhar/docker-kubernetes:v2-web
name: web
- Now let's start the update process.
- You can use kubectl apply -f deployment.yaml
- By default Deployment favors rolling updates.
Because a Deployment is not tied to a specific version of the app, we can delete the version label from the label selector in our yaml file.
Rolling Update Deployment Strategy
- Rolling Deployment is a process of replacing currently running instances of our application with newer ones.
- When using this deployment process, there will be a period of time when two versions of our app are running.
+-------+ +-------+
|Replica|---+->+ Pod_1 |
+----------->| Set_1 | | +-------+
| +-------+ |
| | +-------+
+----+-----+ +->+ Pod_1 |
|Deployment| | +-------+
+----------+ |
| +-------+
+->+ Pod_1 |
+-------+
- This is the initial state of our v1 app, because we said we needed 3 replicas.
- After we trigger the update, Kubernetes will roll it out using the rolling update strategy.
How Rolling Update is done?
+-------+
+->+ Pod_1 |
| +-------+
|
+-------+ | +-------+
|Replica+----->+ Pod_1 |
+----------->+ Set_1 | | +-------+
| +-------+ |
| | +-------+
+----+-----+ +->+ Pod_1 |
|Deployment| +-------+
+----+-----+
| +-------+
| |Replica| +-------+
+----------->+ Set_2 +----->+ Pod_2 | |Running|!Ready|
+-------+ +-------+
- After we start the update process, Kubernetes will create a second ReplicaSet.
- It will also spin up one new Pod. This can be configured, which we will discuss later.
- At the moment, there are 4 pods running but only three are in the ready state. Pod_2 was just created, and it takes time to be ready.
- After Pod_2 is ready, it will terminate one of the Pod_1 pods and create a new Pod_2.
+-------+
+->+ Pod_1 |
| +-------+
|
+-------+ | +-------+
|Replica+----->+ Pod_1 |
+----------->+ Set_1 | | +-------+
| +-------+ |
| | +-------+
+----+-----+ +->+ |---| | |Terminating|
|Deployment| +-------+
+----+-----+
| +-------+
| |Replica| +-------+
+----------->+ Set_2 +---+->+ Pod_2 | |Running|Ready|
+-------+ | +-------+
|
| +-------+
+->+ Pod_2 | |Running|!Ready|
+-------+
- After the new Pod_2 is ready, it will terminate another Pod_1, and it continues like this until the new ReplicaSet's desired state is met.
When using the Rolling Update strategy, users won't see any downtime. Also, two versions of the app will run during the update process.
+-------+
|Replica|
+----------->+ Set_1 |
| +-------+
|
+----+-----+ +-------+
|Deployment| +->+ Pod_2 |
+----+-----+ | +-------+
| +-------+ |
| |Replica| | +-------+
+----------->+ Set_2 +----->+ Pod_2 |
+-------+ | +-------+
|
| +-------+
+->+ Pod_2 |
+-------+
- This will be the final state of our update process.
- From the diagram, you can see that the Deployment doesn't delete the old ReplicaSet.
- But keeping old ReplicaSets around may not be ideal.
- We can configure how many ReplicaSets to keep using the revisionHistoryLimit property on the Deployment resource.
- It defaults to two, so normally only the current and the previous revision are shown in the history and only the current and the previous ReplicaSet are preserved.
- Older ReplicaSets are deleted automatically.
Status of update process
- We have the kubectl rollout command, which has lots of helper functions that help us interact with the Deployment resource.
- One of them is to see the status of the Deployment update process.
kubectl rollout status @resource_type @resource_name
kubectl rollout status deployment web
- This will log the phases Kubernetes is going through when updating our app.
Controlling Rollout Speed
- You can control the speed at which Pods are replaced using two Kubernetes properties.
- If you inspect the Deployment you can see the default values Kubernetes sets for these properties.
➜ k describe deployments.apps web
Name: web
Namespace: default
Labels: env=prod
version=v1
Annotations: deployment.kubernetes.io/revision: 5
...
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
...
- You can see we have two properties.
- maxUnavailable
  - Specifies the maximum number of Pods that can be unavailable during the update process.
  - The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%).
  - The absolute number is calculated from the percentage by rounding down.
  - The value cannot be 0 if maxSurge is 0.
  - The default value is 25%.
- maxSurge
  - Specifies the maximum number of Pods that can be created over the desired number of Pods.
  - The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%).
  - The value cannot be 0 if maxUnavailable is 0.
  - The absolute number is calculated from the percentage by rounding up.
  - The default value is 25%.
Setting the values
- All of these properties are defined under spec.strategy.rollingUpdate.
apiVersion: apps/v1
kind: Deployment
metadata:
# metadata
spec:
strategy:
type: RollingUpdate # Default
rollingUpdate:
maxSurge: 2
maxUnavailable: 1
# other props
Rolling Back Update
- From the previous section, you saw updating app with the new version was so easy.
- You just update the image you want to use and Kubernetes does most of the work for us.
- But what if the new version of the app we just deployed has some bug?
- What if the bug turns out to be severe? We need to roll back the update as fast as possible. We can't let users use an unstable app.
- With Kubernetes, rolling back is just as easy.
Using kubectl rollout undo to roll back an update
- We can use the kubectl rollout undo @resource_type @resource_name syntax to undo the update process.
- It is very easy to do so.
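- For the Deployment named web that we created above, the command would be:
kubectl rollout undo deployment web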
History of Deployment rollout
- You can easily see the revision history with kubectl rollout history @resource_type @resource_name
➜ kubectl rollout history deployment web
deployment.extensions/web
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
- If we do one little extra step when creating / rolling out the update, we can see some values in the CHANGE-CAUSE column.
- If when creating / updating we use the --record option, it will record the command we used to make the update.
- The command is then visible in the CHANGE-CAUSE column.
➜ kubectl rollout history deployment web
deployment.extensions/web
REVISION CHANGE-CAUSE
2 <none>
3 <none>
4 kubectl set image deployment web web=amantuladhar/docker-kubernetes:v1-web --record=true
Getting more detail on history
- You can see more detail of a particular revision by using the --revision=@revision_number option.
➜ kubectl rollout history deployment web --revision 4
deployment.extensions/web with revision #4
Pod Template:
Labels: env=prod
pod-template-hash=6445f5654d
version=v1
Annotations: kubernetes.io/change-cause: kubectl set image deployment web web=amantuladhar/docker-kubernetes:v1-web --record=true
Containers:
web:
Image: amantuladhar/docker-kubernetes:v1-web
Port: 8080/TCP
Host Port: 0/TCP
Liveness: http-get http://:8080/test/ delay=10s timeout=1s period=30s #success=1 #failure=3
Readiness: http-get http://:8080/test/ delay=10s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts: <none>
Volumes: <none>
Rolling Back to specific revision
- We can control the number of revisions to retain using the revisionHistoryLimit property.
- If there are multiple revisions, we can jump back to a particular revision as well.
kubectl rollout undo @resource_type @resource_name --to-revision=@revision_number
➜ kubectl rollout undo deployment web --to-revision=2
deployment.extensions/web rolled back
Failed Deployment
- When upgrading your application to a new version, Kubernetes might not be able to deploy the latest version of your application.
- This can happen due to multiple reasons:
  - Insufficient resources
  - Readiness probe failures
  - Image pull errors, etc.
- Kubernetes can automatically detect if an upgrade revision is bad.
spec.minReadySeconds
- spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready.
- Ready meaning none of its containers should crash.
- This defaults to 0, i.e. the Pod will be considered available as soon as it is ready.
- If the Deployment has only a readiness probe, it will mark a Pod as available as soon as its readiness probe succeeds.
- With minReadySeconds we can take our readiness probe one step further.
- If our readiness probe starts failing before minReadySeconds have passed, the rollout of the new version will be blocked.
spec.progressDeadlineSeconds
- We can configure when Kubernetes marks a Deployment as failed with the property spec.progressDeadlineSeconds.
- spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed.
- If specified, this field needs to be greater than spec.minReadySeconds.
In the future, once automatic rollback is implemented, the deployment controller will roll back a Deployment as soon as it observes such a condition.
Rollout v1 app
- Let's start with this Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
#metadata
spec:
minReadySeconds: 20
progressDeadlineSeconds: 30
strategy:
type: RollingUpdate # Default
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
# replicas, selector
template:
#metadata
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: web
readinessProbe:
httpGet:
path: /test/
port: 8080
initialDelaySeconds: 10
periodSeconds: 1
#liveness probe & ports
- minReadySeconds is set to 20 seconds.
  - That means even if our readiness probe succeeds before 20 seconds, the deployment is not considered successful yet.
  - Kubernetes will execute the readiness probe multiple times within the minReadySeconds time frame.
  - If within that time period the readiness probe fails, the deployment is considered unsuccessful.
  - Remember the Pod Ready status will be updated by the readiness probe, but the Deployment has its own status to check whether the deployment process was a success or a failure.
- progressDeadlineSeconds is set to 30 seconds.
  - Kubernetes will wait for 30 seconds before it sets the deployment process status.
- maxSurge is set to 1.
  - Relative to the replicas number, Kubernetes will create only 1 extra Pod.
- maxUnavailable is set to 0.
  - Kubernetes will always keep the desired replicas running.
- By setting maxSurge to 1 and maxUnavailable to 0, we are asking Kubernetes to create a new Pod and only delete an existing Pod when the new one is ready.
- readinessProbe checks if our Pod is ready. We are running our probe every second.
Rollout v2
- To roll out v2, the only thing we need to do is update the image value.
- But for our test we will update the readiness probe as well.
- We will point the readiness probe to the /test/status-5xx endpoint.
- This endpoint will return status 200 for the first 5 calls, but after that 500. (Trying to emulate a situation where the app runs for a while and after some time starts to crash.)
# other props
spec:
# other props
template:
#other props
spec:
containers:
- image: amantuladhar/docker-kubernetes:v2-web
name: web
readinessProbe:
httpGet:
path: /test/status-5xx
port: 8080
initialDelaySeconds: 10
periodSeconds: 1
#other props
- Of course in the real world we won't change the readiness probe path, but for this test we are changing it.
Rollout Status
- After we apply our update, Kubernetes will start the deployment process.
- It will create a new Pod with the new version of our app.
NAME READY STATUS RESTARTS AGE
pod/web-5577ff7d67-fp6ht 0/1 Running 0 1m
pod/web-697456ff84-29tm8 1/1 Running 0 17m
pod/web-697456ff84-jhd2c 1/1 Running 0 17m
pod/web-697456ff84-vvtkd 1/1 Running 0 17m
- Initially it won't be Ready.
- But after your app starts, for a brief period of time the new Pod will have status Ready. (Remember, for the first 5 calls our endpoint works.)
NAME READY STATUS RESTARTS AGE
pod/web-5577ff7d67-fp6ht 1/1 Running 0 2m
pod/web-697456ff84-29tm8 1/1 Running 0 23m
pod/web-697456ff84-jhd2c 1/1 Running 0 23m
pod/web-697456ff84-vvtkd 1/1 Running 0 23m
- If we hadn't defined minReadySeconds, at this point Kubernetes would terminate one of the older Pods and create a new one.
- But because we did, and the time we defined in minReadySeconds has still not passed, the readiness probe continues to check whether the Pod is ready.
- Because of the way we configured our readiness probe, our probe will fail before minReadySeconds has passed.
- After that the Pod Ready status will be changed.
NAME READY STATUS RESTARTS AGE
pod/web-5577ff7d67-fp6ht 0/1 Running 0 5m
pod/web-697456ff84-29tm8 1/1 Running 0 24m
pod/web-697456ff84-jhd2c 1/1 Running 0 24m
pod/web-697456ff84-vvtkd 1/1 Running 0 24m
- After progressDeadlineSeconds has passed, Kubernetes will set the status of the deployment. In our case it failed, as the Pod is not Ready.
- If we were running kubectl rollout status deployment web we would see the following log.
➜ k rollout status deployment web
Waiting for deployment "web" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "web" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "web" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "web" rollout to finish: 1 out of 3 new replicas have been updated...
error: deployment "web" exceeded its progress deadline
Recreate Deployment Strategy
- Recreate is the simplest Deployment strategy.
- If we use this strategy, Kubernetes will delete all Pods running the existing version, and only then will it create new ones.
- For a brief period of time our server will not be able to handle requests.
Setting strategy
- Setting the Recreate strategy in Kubernetes is easy.
- Just set spec.strategy.type to Recreate.
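- A minimal sketch of what that looks like in the Deployment spec:
spec:
  strategy:
    type: Recreate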
How Recreate strategy works
- When we deploy our first version, Kubernetes will make sure it matches our desired state.
+-------+
+->+ Pod_1 |
| +-------+
|
+-------+ | +-------+
|Replica+----->+ Pod_1 |
+----------->+ Set_1 | | +-------+
| +-------+ |
| | +-------+
+----+-----+ +->+ Pod_1 |
|Deployment| +-------+
+----------+
- When we start our upgrade process, Kubernetes will stop all our containers.
- It will delete the Pods that are running the old version of the app.
+-------+
+->+ |-----| |Terminating|
| +-------+
|
+-------+ | +-------+
|Replica+----->+ |---| | |Terminating|
+----------->+ Set_1 | | +-------+
| +-------+ |
| | +-------+
+----+-----+ +->+ |-----| |Terminating|
|Deployment| +-------+
+----------+
- After all old Pods are deleted, Kubernetes will start new Pods with our v2 app.
+-------+
|Replica|
+----------->+ Set_1 |
| +-------+
|
+----+-----+ +-------+
|Deployment| +->+ Pod_2 |
+----+-----+ | +-------+
| +-------+ |
| |Replica| | +-------+
+----------->+ Set_2 +----->+ Pod_2 |
+-------+ | +-------+
|
| +-------+
+->+ Pod_2 |
+-------+
- After this our deployment process is finished.
Note: Kubernetes creates a new ReplicaSet for this deployment strategy as well. This is so that we can easily roll back to the previous version.
Blue Green Deployment
Blue Green Deployment
Kubernetes doesn't have (as of now) a strategy: BlueGreen like we had RollingUpdate and Recreate.
How blue-green deployment strategy works?
- If we want to upgrade the instances of V1 (Blue), we create exactly the same number of instances of V2 (Green) alongside V1 (Blue).
- Initially Service that we are using to expose our app will be pointing to V1 (Blue).
- After testing that the V2 (Green) meets all the requirements the traffic is switched from V1 (Blue) to V2 (Green).
- This technique reduces downtime by running two versions of the app at the same time.
- But only one version of app is live to user.
- For our example, Blue is currently live, and Green is idle.
- But after upgrade Green will be live, and Blue will be idle. (Later on Blue will be deleted)
Advantages
- With this strategy we can have an instant roll back, as we still have the old version of the app running in the background.
- Unlike Rolling Update, users are never served by two different versions of the app at once.
Disadvantages
- The Blue Green deployment strategy might be expensive. When updating the app we use double the resources.
Blue Green Stages
Initial State
+-------------+
| |
+-------+ Service |
| | |
| +-------------+
|
|
|BLUE| v
+---------------------------+
| +-------------------+ |
| |Pod_1||Pod_1||Pod_1| |
| --------------------+ |
| -------------+ +-------+ |
| |Deployment_1| |Replica| |
| -------------+ | Set_1 | |
| +-------+ |
+---------------------------+
- Initially we will have V1 app running, for now let's say Blue.
Rollout Update
+--------------+
| |
+-------+ Service |
| | |
| +--------------+
|
|
|BLUE| v |GREEN|
+---------------------------+ +---------------------------+
| +-------------------+ | | +-------------------+ |
| |Pod_1||Pod_1||Pod_1| | | |Pod_2||Pod_2||Pod_2| |
| --------------------+ | | --------------------+ |
| | | |
| | | |
| -------------+ +-------+ | | -------------+ +-------+ |
| |Deployment_1| |Replica| | | |Deployment_2| |Replica| |
| -------------+ | Set_1 | | | -------------+ | Set_2 | |
| +-------+ | | +-------+ |
+---------------------------+ +---------------------------+
- When the rollout starts we create our V2 (Green); ideally we create the same number of Pod replicas.
- Notice, we are creating a totally new Deployment here.
Switch to new version
+--------------+
| |
| Service +------+
| | |
+--------------+ |
|
|
|BLUE| |GREEN| v
+---------------------------+ +---------------------------+
| +-------------------+ | | +-------------------+ |
| |Pod_1||Pod_1||Pod_1| | | |Pod_2||Pod_2||Pod_2| |
| --------------------+ | | --------------------+ |
| | | |
| | | |
| -------------+ +-------+ | | -------------+ +-------+ |
| |Deployment_1| |Replica| | | |Deployment_2| |Replica| |
| -------------+ | Set_1 | | | -------------+ | Set_2 | |
| +-------+ | | +-------+ |
+---------------------------+ +---------------------------+
- To make our new version of the app, V2 (Green), live, we will have to point the Service to the newly created Pods.
- For a Kubernetes Service we just change the label selector.
Clean up or roll back
- At this stage, we can either cleanup resources or we can roll back to our previous version.
- If we think the new version of the app is stable, we can delete the old Pods / Deployments.
- But if we need to do a roll back, it is very easy. Just change the Service label selector again.
Blue Green Deployment With Kubernetes
Deploy the V1 App
- Create a Deployment with the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-v1
labels:
version: v1
env: prod
spec:
replicas: 3
selector:
matchLabels:
env: prod
version: v1
template:
metadata:
name: web-v1
labels:
version: v1
env: prod
spec:
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: web-v1
#probes
- Notice how I am appending -v1 to the Deployment, ReplicaSet and Pod names. This is important because we don't want to replace the existing Deployment resource.
Create a Service
apiVersion: v1
kind: Service
metadata:
name: web
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
nodePort: 30000
selector:
env: prod
version: v1
- The Service at the moment is selecting Pods with the version: v1 label.
Update to V2 App
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-v2
labels:
version: v2
env: prod
spec:
replicas: 3
selector:
matchLabels:
env: prod
version: v2
template:
metadata:
name: web-v2
labels:
version: v2
env: prod
spec:
containers:
- image: amantuladhar/docker-kubernetes:v2-web
name: web-v2
#probes
- We create a new Deployment named *-v2.
- The ReplicaSet / Pod labels are also suffixed with -v2.
- At the moment the Pods created by our new Deployment are not live. The Service is still pointing to the old version of the app.
- After all the new version Pods are ready, we can switch the Service selector.
Change the Service selector
- You can change the Service selector using multiple techniques.
- I will use kubectl patch:
kubectl patch service web --patch '{"spec": {"selector": {"version": "v2"}}}'
- After this you will see that all traffic is now redirected to the new version.
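- If you want to double-check which Pods the Service now points to, one quick way (a sketch using standard kubectl commands, not part of the original walkthrough) is to compare the Service endpoints with the Pod IPs:
kubectl get endpoints web
kubectl get pods -o wide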
Cleanup / Rollback Update
- If you want to clean up, you can delete the old Deployment (a sample command follows the rollback command below).
- If you want to roll back the update, you can use the same command as above and change the selector back to v1.
kubectl patch service web --patch '{"spec": {"selector": {"version": "v1"}}}'
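- For the cleanup itself, a minimal sketch (assuming the resource names used above) is to delete the old Deployment once you trust the new version:
kubectl delete deployment web-v1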
Environment Variable
- Most of the apps that we develop nowadays depend on some kind of environment variables / configuration files.
- Using configuration files with containers is a bit tricky.
- If we bake the configuration into the image itself, we need to update / rebuild the image every time we change the configuration.
- Oftentimes some configuration properties are secret. We don't want those properties to be passed around carelessly.
- Containers normally use environment variables to get their configuration.
- Kubernetes also supports passing environment variables to containers.
Kubernetes doesn't allow Pod-level environment variables. Environment variables are set at the container level.
- We cannot modify the environment variables of a container once it is running.
- We can only dispose of the current container and re-create a new one.
- If you think about it, that makes sense, right?
- Containers are supposed to be immutable; if you change the configuration of one container, other replicas may not have the same configuration.
- At least not until you make the same change to all of them.
Example
- Let's create a Deployment that sets environment variables on the container.
apiVersion: apps/v1
kind: Deployment
metadata:
# metadata
spec:
# replica & matchLabel selector
template:
metadata:
# metadata
spec:
containers:
- image: amantuladhar/docker-kubernetes:environment #1
name: web
env: #2
- name: GREETING #3
value: Namaste #4
- name: NAME #3
value: Folks!! #4
# probe properties
#1
- We are using a new image, i.e. docker-kubernetes:environment. I have modified the endpoint so that it changes the message based on environment variables.
- GREETING - Default is Hello
- NAME - Default is Fellas!!
#2
- We use the env property to set environment variables on the container.
#3
- Name of the environment variable you want to set.
#4
- Value of the environment variable you want to set.
Calling Endpoint
➜ http $(minikube service web --url)/test
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Tue, 26 Feb 2019 05:47:37 GMT
Transfer-Encoding: chunked
{
"app.version": "v1-web",
"host.address": "172.17.0.7",
"message": "Namaste Folks!!"
}
Updating environment variable
- Note: To change the environment variable of a container, the container needs to be restarted.
- You can update the environment variable easily with the command below, but it will restart the container.
kubectl set env deployment web GREETING="Adios"
kubectl set env @resource @name @env_var=@env_value
- If you want to unset an environment variable:
kubectl set env deployment web GREETING-
- If you suffix the environment variable name with a dash (-), it will unset that environment variable.
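- To see which environment variables are currently set on the Deployment, kubectl set env has a --list flag (a quick sketch; output format may vary by kubectl version):
kubectl set env deployment web --list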
Config Maps
- Setting environment variables in the container template is one option for sending configuration to a container.
- But if configuration differs from environment to environment (like DEV, QA, PROD), then hard-coding values in env may not be a good idea.
- Kubernetes has a resource called ConfigMap which treats configuration as a separate object.
- So each environment can have the same objects, but with different values.
ConfigMaps YAML
- Let's create a simple ConfigMap that stores our GREETING and NAME environment variable values.
apiVersion: v1
kind: ConfigMap
metadata:
name: web
data:
GREETING: Namaste
NAME: Friends!!!
- As you can see, creating a ConfigMap is simple.
- You set kind as ConfigMap.
- You set the name of the ConfigMap in the metadata section.
- Under the data root node, you add your property values.
- It is just a set of key-value pairs.
- Here we are setting GREETING to Namaste and NAME to Friends!!!
Creating ConfigMap
- Creating a ConfigMap is simple too; you just use your handy kubectl create command.
kubectl create -f configmap.yaml
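- As an alternative sketch (not used in the rest of this doc), you can create the same ConfigMap without a YAML file by using --from-literal:
kubectl create configmap web --from-literal=GREETING=Namaste --from-literal=NAME='Friends!!!'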
Using ConfigMap with env property
- If you want to get a specific environment variable value from a ConfigMap, you can use the valueFrom property instead of the value property.
apiVersion: apps/v1
kind: Deployment
# metadata
spec:
# replicas & selector
template:
# metadata
spec:
containers:
- image: amantuladhar/docker-kubernetes:environment
name: web
env:
- name: GREETING
valueFrom:
configMapKeyRef:
name: web # Name of ConfigMaps
key: GREETING
- name: NAME
valueFrom:
configMapKeyRef:
name: web # Name of ConfigMaps
key: NAME
# probes & ports
- In the above file, you can see we are using the valueFrom property instead of value.
- Using configMapKeyRef tells Kubernetes to look inside a ConfigMap.
- configMapKeyRef.name states the name of the ConfigMap resource.
- configMapKeyRef.key states which key to look for in the defined resource.
- Run the app using the above configuration and call your endpoint.
➜ http $(minikube service web --url)/test
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Wed, 27 Feb 2019 02:00:50 GMT
Transfer-Encoding: chunked
{
"app.version": "v1-web",
"host.address": "172.17.0.6",
"message": "Namaste Friends!!!"
}
- Now our message is Namaste Friends!!!.
- That's the value we have in our ConfigMap.
Updating ConfigMap
Note: Updating a ConfigMap doesn't update the values a running container already has.
If you change a value in the ConfigMap, two similar containers might end up in an inconsistent state.
- You can update the ConfigMap values using the kubectl patch command.
kubectl patch configmaps web --patch '{"data": {"GREETING": "Yo"}}'
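- To confirm the patch was applied to the ConfigMap itself (remember, running containers keep their old values until they are recreated), you can read it back:
kubectl get configmap web -o yaml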
Import All values from ConfigMap
- While the previous method gave us the power to pull in values from a ConfigMap, it was troublesome: if we have, say, 20 properties, we have to use valueFrom 20 times.
- Kubernetes has another property, envFrom, which allows you to import all values from a ConfigMap.
apiVersion: apps/v1
kind: Deployment
# metadata
spec:
# replicas & selector
template:
# metadata:
spec:
containers:
- image: amantuladhar/docker-kubernetes:environment
name: web
envFrom:
- configMapRef:
name: web
# probe and port
- In the above example, we are using envFrom instead of env.
- When we use the envFrom property, we import all properties from that resource. (But there is a limitation.)
- We can also use the prefix keyword (envFrom[].prefix) if we want to prefix our configuration properties with something (see the sketch after this list).
- If any of your ConfigMap properties has a dash (-) in its name, it won't be imported.
- That's because environment variables with a dash (-) are invalid.
- Kubernetes doesn't do any post-processing for you in this case.
There is one more way to load a ConfigMap, i.e. using volumes.
We don't know how to mount volumes yet, so this will be skipped for now.
Secrets
- We learned how ConfigMap helps us separate our app template and properties.
- While that combination works for almost all scenarios, Kubernetes has one more resource for saving properties, i.e. Secrets.
- Secrets, as the name suggests, should be used to store properties that need to be kept secret.
- In other words, we should keep sensitive properties in the Secret Kubernetes resource.
- Secrets are more secure, and they reduce the risk of accidental exposure.
Secret YAML
- A Secret is not that different from a ConfigMap.
- Changing kind from ConfigMap to Secret should do the trick. (But there's a catch.)
- Let's update our previous ConfigMap resource file.
apiVersion: v1
kind: Secret
metadata:
name: web
data:
GREETING: Namaste
NAME: Friends!!!
- Let's create this secret and see what happens.
kubectl create -f secrets.yaml
- You will get an error message that says something like this.
Error from server (BadRequest): error when creating "secret-base64.yaml":
Secret in version "v1" cannot be handled as a Secret: v1.Secret:
Data: decode base64: illegal base64 data at input byte 4, error found in #10 byte of ...|:"Namaste","NAME":"F|..., bigger context ...|{"apiVersion":"v1","data":{"GREETING":"Namaste","NAME":"Friends!!!"},"kind":"Secret","metadata":{|...
- If you read the error message, you can see Kubernetes is expecting base64-encoded values for the data properties, but ours are plain strings.
Secrets data are base64
- Let's convert our data to base64 and add it.
➜ echo "Namaste" | base64
TmFtYXN0ZQo=
➜ echo "Friends" | base64
RnJpZW5kcwo=
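- Note that echo appends a trailing newline, which also gets encoded (that's why the values above end in o=). If you want to encode only the string itself, use echo -n:
➜ echo -n "Namaste" | base64
TmFtYXN0ZQ==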
- Our new Secret definition:
apiVersion: v1
kind: Secret
metadata:
name: web
data:
GREETING: TmFtYXN0ZQo=
NAME: RnJpZW5kcwo=
- Now let's create a Secret again.
kubectl create -f secrets.yaml
➜ kubectl get secrets web -o yaml
apiVersion: v1
data:
GREETING: TmFtYXN0ZQo=
NAME: RnJpZW5kcwo=
kind: Secret
metadata:
creationTimestamp: "2019-03-02T22:53:07Z"
name: web
namespace: default
resourceVersion: "102112"
selfLink: /api/v1/namespaces/default/secrets/web
uid: f1cf2235-3d3d-11e9-aea6-025000000001
type: Opaque
Reading as env var one at a time
- Using a Secret is as easy as using a ConfigMap.
- When Kubernetes loads Secret values into Pods, it doesn't inject the base64-encoded value but the decoded (actual) value.
- So no conversion is needed on our side.
- We use secretKeyRef instead of configMapKeyRef.
env:
- name: GREETING
valueFrom:
secretKeyRef:
name: web # Name of Secret
key: GREETING
- name: NAME
valueFrom:
secretKeyRef:
name: web # Name of Secret
key: NAME
Reading all at once
- Like with ConfigMap, we can import everything at once; we just use secretRef instead of configMapRef.
envFrom:
- secretRef:
name: web
- And that's it; we can access the Secret values as environment variables.
Secret string value
- As you just saw, even adding a plain string value to a Secret is a bit of a hassle.
- You need to convert your value to base64.
- This is useful when you load a file into a Secret. (We will get to this when we learn about volumes.)
- But Kubernetes has one more property, stringData, which lets you add your property values as plain text.
- The only thing you need to remember is that the stringData property is write-only. You cannot read from it. You will see what I mean in a bit.
apiVersion: v1
kind: Secret
metadata:
name: web
stringData:
GREETING: Namaste
NAME: Friends!!!
- You can see that using stringData we can add secret values as plain text. We can mix and match with data if we want to.
- Create the Secret and look at the YAML generated by Kubernetes.
➜ kubectl get secrets web -o yaml
apiVersion: v1
data:
GREETING: TmFtYXN0ZQ==
NAME: RnJpZW5kcyEhIQ==
kind: Secret
metadata:
creationTimestamp: "2019-03-02T23:48:35Z"
name: web
namespace: default
resourceVersion: "105712"
selfLink: /api/v1/namespaces/default/secrets/web
uid: b18f158f-3d45-11e9-aea6-025000000001
type: Opaque
- The generated YAML doesn't have stringData.
- That's why stringData is a write-only property.
- We can write it as plain text in the YAML file, but we cannot read it back.
- When you load a file into a Secret, Kubernetes will convert the value for you.
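- If you ever need to read a value back, kubectl only shows the base64 form, so you decode it yourself (a sketch; the decode flag may be -d or -D depending on your base64 implementation):
kubectl get secret web -o jsonpath='{.data.GREETING}' | base64 --decode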
Secrets can be mounted as volumes. This is mostly useful when you load a file into a Secret.
Environment variables may not be the safest way to expose Secret values.
Host Path
- We learned how to deploy our stateless app using Kubernetes.
- But we may also need to deploy an app that has state, one that accesses a database to save records.
- Making a stateful app run on Kubernetes is a bit tricky, mainly because our app can run on different nodes.
- Kubernetes has different types of volumes out of the box, and depending on the cloud provider you use, you will have a provider-specific way to persist your data.
Kubernetes Host Path Volume
- Kubernetes has a simple volume type called hostPath which gives you persistent storage.
- But hostPath has one big limitation: a hostPath volume mounts a file or directory from the host node's filesystem into your Pod.
- The main point here is that the hostPath volume type mounts volumes from the node.
- Our cluster can have multiple nodes, and two nodes don't share a filesystem.
- Pods with identical configuration may behave differently on different nodes due to different files on those nodes.
- Because we are developing our app locally, this doesn't affect us: we are running only one node, so we can use hostPath.
Using Host Path
apiVersion: apps/v1
kind: Deployment
# metadata
spec:
#replicas & metadata
template:
# metadata
spec:
volumes: #1
- name: volume-name #2
hostPath: #3
path: /absolute/host/path #4
containers:
- image: amantuladhar/docker-kubernetes:v1-web
name: web-co
volumeMounts: #5
- name: volume-name #6
mountPath: /containerPath/ #7
#1
- In the above Deployment definition, we are using the volumes property to define the volumes we want to use. (This is not the proper way to define a volume in production, but let's go step by step.)
#2
- We are giving a name to our volume so that we can refer to it later.
#3
- The hostPath property. This is the type of volume we are using. For local development we use hostPath, but in a real cluster this will be a provider-specific volume type.
#4
- Child property of hostPath. This is an absolute path which specifies which folder from the host to mount.
- Remember, for minikube the host path is inside the VM.
- The minikube documentation explains how to mount your actual host path into the minikube VM, and then mount that volume into Kubernetes.
- TLDR; use minikube mount actual/host/path:/minikube/vm/path
#5
- Using the volumeMounts property we specify which volumes our container needs to mount.
#6
- Using the name property we specify the name of the volume we want to mount.
- It needs to match the name defined in #2.
#7
- mountPath takes the path inside the container. This is the place where the container will access the external data.
We are using a Deployment here. If you just want to create a Pod, remember that everything below the template property is basically a Pod definition (a sketch of the equivalent standalone Pod follows).
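- For reference, a minimal sketch of that standalone Pod with the same hostPath volume (the Pod name web-pod is made up for this example):
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  volumes:
    - name: volume-name
      hostPath:
        path: /absolute/host/path
  containers:
    - image: amantuladhar/docker-kubernetes:v1-web
      name: web-co
      volumeMounts:
        - name: volume-name
          mountPath: /containerPath/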
- After you create the resource, we can go inside the Pod and play around with it.
- Let's create a file inside the container and see if that file is available on our host.
- And let's delete the Pod and see if a new Pod can see that file.
kubectl create -f hostPath.yaml
- Creating file inside Pod
# Get inside Pod
➜ kubectl exec -it web-dep-89fd5db9-897dr sh
# Change directory to path where we mounted the volume
/myApp # cd /containerPath/
# Create a file with some text
/containerPath # echo "Very important data" >> file.txt
# Check if file exist (inside container)
/containerPath # ls
file.txt
- Check if the file is visible on your host path. You will see the file if everything went well (one way to check with minikube is sketched below).
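- Since the node runs inside the minikube VM, one way to check from your machine (a sketch; assumes minikube ssh accepts a command argument) is:
minikube ssh "ls /absolute/host/path"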
- Now, restart the container and see if the file still exists.
kubectl delete pod --all
- This will delete all the Pods.
- But the Deployment will spawn new Pods immediately.
- Go inside a new Pod and see if the file exists.
# Getting inside new Pod
➜ kubectl exec -it web-dep-89fd5db9-689sw sh
# Checking if file exist
/myApp # ls /containerPath/
file.txt
# Checking the content of file
/myApp # cat /containerPath/file.txt
Very important data
- We learned how we can easily mount a volume in Kubernetes by using hostPath.
- Make sure that if you use hostPath in production, you know what you are doing.
- There may be scenarios where we need this, but for general purposes like databases this is not a solution.
- Next we will learn how we can improve this example and what problems this solution has.