Docker – Part 3 – Run your first container
In this blog article, we will cover the Docker architecture and Docker objects, and we will also run a simple Docker container.
It is recommended to go through the previous blog articles in order for a better understanding.
https://myknowtech.com/tag/docker
Tip: Since copy/paste is not possible in this post, I have documented the shell commands in the following git repository; you can easily copy/paste from there.
https://github.com/Prem1991/myknowpega/tree/master/docker
Docker architecture
The three main components are the Docker host, the Docker client and Docker registries.
1. Docker host
The host machine where you can run the docker containers. It is the machine where the docker engine is installed.
It uses the Docker daemon to perform different tasks on containers.
Docker daemon – dockerd. This is the persistent process that manages the containers. It listens for Docker API calls and carries out the requested operations. It is like the heartbeat of the Docker engine.
Please check the article to read more technical details about docker daemon –
https://docs.docker.com/engine/reference/commandline/dockerd/
In the previous post, we installed the docker desktop on the local machine. So here our Windows local machine is the docker host.
2. Docker Client
Users who want to access or control the docker containers can talk to the daemon by making API calls.
You can use command line tools to issue the docker commands (each docker command in turn translates to an API call).
On Windows, you can use any command line tool, such as PowerShell, to act as the Docker client.
Quickly open PowerShell and execute the command below
docker --version
Behind the scenes, an API call is made and the installed Docker version is returned.
Important note: Docker client can be on the same docker host machine or it can be on any remote machine as well.
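If you want to see this client/daemon split yourself, two standard commands are handy (shown here just as a quick check):
docker version    # prints the client version and the server (daemon) version separately
docker info       # summarises the daemon state – running containers, images, storage driver, etc.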
3. Docker registries
There are two types of registries
1. Public docker registry – Docker hub.
I hope this site is familiar to many. It is the hosted repository service provided by Docker where you can find and share Docker images with your team.
https://hub.docker.com/search?q=&type=image
Here you can check out all the official and unofficial images.
Let’s take a little tour of Docker Hub.
Explore tab –
You can explore all the available public images.
You can see official images such as postgres and Ubuntu.
We will go through images later.
Repository tab
Here you can manage your own repositories. I already have an old repository, created last year during my Docker learning. It is a public repository.
Using the create repository button, you can always create a new repository for a new application image.
Note: With the free plan, you can have unlimited public repositories and only one private repository.
You can navigate through docker hub to check out the other features 🙂
2. Private registry
Some organizations want to have their own Docker registry to store custom base images. These base images can then be used by different applications across the organization.
There is one more term in Docker Desktop called repository. It can be either a local repository or a remote repository.
Local repository images are those that have already been downloaded to the Docker host machine, whereas a remote repository can refer to either a public registry like Docker Hub or the organization’s own private registry.
Local repository
You see there are no local images.
Remote repositories
Since I connected my docker desktop to my personal docker hub account, you can see the old remote repository.
Now we are going to see the very first docker command.
I am going to pull the Ubuntu image from the remote repository – Docker Hub.
Open PowerShell and execute the command – docker pull ubuntu
You see the image gets pulled from the Docker Hub public registry and is organised into different layers (3 layers).
Now switch to the local repository and you should see the pulled image.
Similarly, you can also push images to a remote repository using the docker push command.
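As a rough sketch of the push flow (the Docker Hub ID and repository name below are placeholders – replace them with your own):
docker login                                           # authenticate with your Docker Hub account
docker tag ubuntu <your-dockerhub-id>/my-ubuntu:test   # re-tag the local image under your namespace
docker push <your-dockerhub-id>/my-ubuntu:test         # push it to your remote repository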
Hope you got some understanding about repositories.
Docker Images
We saw that Docker repositories hold Docker images. Every organization has its own repositories and publishes its official images.
So, What really is an Image?
We saw that containerising an application means packaging it along with all its dependencies and libraries so that it can run in any environment. To package all of that, we need a specification, right? That is exactly what a Docker image provides.
What is a dockerfile?
– Each Docker image is built from a Dockerfile. If the Docker image is a dish, then the Dockerfile is the recipe.
– It is a text document that contains all the commands or instructions needed to assemble/build an image.
Let’s quickly check the Dockerfile for the Ubuntu image.
For all the official images, you can directly go to the Dockerhub and check the Image specification.
Switch to Docker Hub and open the Ubuntu repository.
You can see the Ubuntu image already has 1B downloads 😀 and comes with different tags.
Most of the images will have tags based on their versions.
The latest Ubuntu version is 20.04, so you see a tag for it. Some users may want an older Ubuntu version; in that case they can pull the image using the tag name.
docker pull <imagename>:<tagname>
If you leave the tag empty, as we did before, it always pulls the image tagged latest.
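For example (18.04 below is just an illustration of an older tag):
docker pull ubuntu          # same as docker pull ubuntu:latest
docker pull ubuntu:18.04    # pulls a specific older tag
docker images               # lists the images now available in the local repository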
Click on the latest tag link to check the dockerfile content.
It opens the git repository for the Ubuntu image.
As I told before, Dockerfile is written with a set of instructions/commands.
a) FROM instruction
Every Dockerfile should start with the FROM instruction, which specifies the base image.
If you are creating a brand-new image from nothing, you can use Docker’s minimal placeholder image – scratch.
Then you see an ADD instruction
b) ADD instruction
The ADD instruction copies files and directories from a source location (in this case, the git repository) into a destination location (when a container runs from this image, the destination refers to the container file system). In other projects, for example a NodeJS application, you would copy your source code into the container so that it can execute the code and run the web application.
In the Ubuntu Dockerfile, they are adding an archive – ubuntu-focal-core-cloudimg-amd64-root.tar.gz
You can find the file under the source location.
There is another similar instruction – COPY. It copies files and directories from the local build context, whereas ADD can also fetch files from a remote URL and automatically extract local tar archives.
The ADD instruction is followed by 3 RUN instructions.
c) RUN instruction
The RUN instruction can execute any command while the image is being built, such as creating a new directory, unzipping a file or installing packages.
d) CMD instruction
It specifies the default command that is executed when a container is started from the image. You can always override it at the time of docker run.
There is a similar instruction – ENTRYPOINT – which is mostly used when the container should always run one dedicated command.
You can check online to understand more about it.
These are the 4 instructions used in the Ubuntu Dockerfile.
The other main instructions are
e) ENV instruction
It is used to set the environment variable for the running environment. For example, you can specify the environment variable for JAVA_HOME or JENKINS_HOME etc.
There is a similar instruction – ARG – which is available only while building the image. For example, you can set an ARG value and use it in different RUN instructions in the Dockerfile.
f) VOLUME instruction
It creates a mount point to share or persist directories.
For example, in our current scenario, my Windows laptop is the Docker host and I can run as many containers as I want. We know containers run in an isolated environment with their own file system, so files created inside a container are lost once the container is removed. A volume creates a mount point so that this data can be persisted.
g) EXPOSE instruction
This instruction informs that the container listens on the specified network port at runtime.
Note: At the time of running a container, you can also use the -p flag for port mapping between the container and the Docker host. We will see it in action soon.
There are a lot more other Docker instructions that we can use when building a docker image. I would recommend you all to go through the link to learn in-depth about each instruction – https://docs.docker.com/engine/reference/builder/
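To see how these instructions fit together, here is a small made-up Dockerfile (a hypothetical app under /opt/myapp – not the real Ubuntu or Jenkins Dockerfile):
# Hypothetical example, for illustration only
# Base image
FROM ubuntu:20.04
# Environment variable available at build time and at run time
ENV APP_HOME=/opt/myapp
# Execute a command while building the image
RUN mkdir -p /opt/myapp/data
# Copy files from the build context into the image
COPY app/ /opt/myapp/
# Declare a mount point for data that should be persisted
VOLUME /opt/myapp/data
# Document the port the container listens on at runtime
EXPOSE 8080
# Default command when a container starts from this image
CMD ["/bin/bash"]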
Okay, enough of theory.
Let’s run our very first docker container – Jenkins web application
Get started.
The main command used to run any container is
docker run <options> <imagename>
Step 1: Open PowerShell and execute docker run for the Jenkins image.
Note: If you don’t specify the tag name, Docker always fetches the image tagged latest.
docker run jenkins/jenkins
Note: When you are pulling an official image, you can just mention the image name. If you want to pull an image from a user repository like mine, you need to specify it as dockerhubid/imagename.
You see the different layers being pulled, based on the order of the instructions in the Dockerfile.
The next time you run the image, it uses the cached layers unless the Dockerfile has changed.
For Jenkins, you need to complete the setup wizard. Remember to note the secret printed in the console output.
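Tip: if you miss the secret in the console output, you can read it from the running container later (a sketch assuming the standard jenkins/jenkins image, which stores it under /var/jenkins_home):
docker ps                                  # copy the Jenkins container id
docker logs <containerid>                  # the initial admin password is printed in the startup logs
docker exec <containerid> cat /var/jenkins_home/secrets/initialAdminPassword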
From Docker Desktop, you can also see a running container. Don’t worry about the name – that is an auto-generated funny name from Docker.
Now try to access localhost:8080
Why can’t I access Jenkins????? –
This is because no port was published to make it reachable from the Docker host – my laptop.
If you look at the Jenkins Dockerfile, the EXPOSE instruction only declares the port Jenkins listens on inside the container. To access the container from outside, you need to publish the port, i.e. do a port mapping.
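You can verify this from the command line – docker port lists only the ports of a container that are actually published to the host:
docker ps                    # get the container id
docker port <containerid>    # empty output here, because nothing was published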
Step 2: Docker run Jenkins using port mapping
So when running a Docker container, you can use the flag -p <dockerhost port>:<container port>
Since I am already using port 8080 for Pega, I am going to use 8090 for the Docker host port.
So my command will be
docker run -p 8090:8080 jenkins/jenkins
Now you should be seeing the Jenkins setup wizard with localhost:8090
Complete the wizard.
Jenkins is ready now.
We have successfully started a docker container.
Just create a dummy Jenkins job and run it a few times.
Now, let’s shut down the server, which means stopping the container.
How to stop the docker container?
Note: from the docker desktop, you can easily stop and delete the container, but we will do it via commands and learn in the process.
Step 1: Get the container ID or name.
docker ps
It shows there are two containers running – the old one (with no published port) and the new one on 8090.
Step 2: Stop the container.
docker stop <container id>
I stopped both the containers.
When I check docker ps again, I should not see any running containers.
Though the containers are stopped, they are not completely removed; they still occupy some resources on the Docker host. If you really don’t need a container in the future, you can always remove it.
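To list the stopped containers as well, add the -a flag:
docker ps -a    # lists all containers, including stopped ones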
Step 3: Remove the container.
docker rm <container id>
In Docker Desktop, the removed container should no longer be there.
Tip: You can also use the force flag to stop and remove a container in a single command – docker rm -f <containerid>
Okay, now let’s start the Jenkins app again by running the container.
Launch localhost:8090.
????? – Again we need to start from the setup wizard??? Why, what happened to the previous Jenkins job?!
Because the data from the container was never persisted on the Docker host!!
So, how do we persist the container data?
Step 3: Docker run Jenkins using volume mount.
There are two types of mounting with docker containers.
a) Volume mount – Here you can create a volume object and mount your container data into the volume object. Docker manages the location and persists the data.
b) Bind mount – Here you specify the location where the container data should be saved, i.e. a directory on the Docker host.
We will quickly check both the options.
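In terms of the -v flag, the difference is only in the left-hand side of the mapping (the volume name, folder and image below are placeholders):
docker run -v myvolume:/data <imagename>       # volume mount: <volume name>:<container path>
docker run -v C:\myfolder:/data <imagename>    # bind mount: <host path>:<container path>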
Volume mount
Step 1: Create a new Volume
docker volume create <volume name>
docker volume create jenkins
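You can verify the volume was created and see where Docker manages it:
docker volume ls                 # lists all volumes
docker volume inspect jenkins    # shows details such as the mount point managed by Docker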
Step 2: Get the right container directory where Jenkins data will be stored.
Usually this is documented in the Jenkins Dockerfile.
Let’s learn a new command to inspect a Docker image
docker image inspect jenkins/jenkins
You see the main volume is – /var/jenkins_home
You need to mount this path using either a volume mount or a bind mount.
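Side note: if you want just the volume definition instead of the full JSON, docker inspect also accepts a Go-template --format flag (a quick sketch):
docker image inspect --format "{{ json .Config.Volumes }}" jenkins/jenkins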
Step 3: Run the docker container using volume mount.
docker run -p 8090:8080 -v jenkins:/var/jenkins_home jenkins/jenkins
There is an error, because I didn’t remove the old container and port 8090 is still in use.
Let’s change the port to 8091 and run it again.
Now again quickly set up Jenkins and add a new job.
Now either remove this container, or just start a new container on a different port, say 8092
docker run -p 8092:8080 -v jenkins:/var/jenkins_home jenkins/jenkins
The data is persisted: you will not see the setup wizard again. Instead you will be prompted to log in, and once logged in, you will see the persisted jobs.
Cool, right?
Let’s do the bind mounting now, where we have the control to specify the location in docker host.
Bind mounting
Step 1: Create a new folder in your docker host – my laptop.
Step 2: Run a docker container by specifying the bind mount location
docker run -p 8093:8080 -v C:\docker:/var/jenkins_home jenkins/jenkins
Windows also gave a warning that I am sharing a Windows folder with the WSL 2 Docker container.
Launch Jenkins on localhost:8093 and finish the setup (since we are using a new mount, there is no existing data).
As you set things up, you will see all the Jenkins-related data being persisted on the Docker host.
So when you run another container specifying the same bind location, then of course it will use the persisted data 🙂 Try it on your own!
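A sketch of what that follow-up run could look like, reusing the same bind location on another free host port (8094 here is just an example):
docker run -p 8094:8080 -v C:\docker:/var/jenkins_home jenkins/jenkins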
So we successfully downloaded the Jenkins image and ran a container by port mapping as well as volume mounting. These are some basics about docker.
I would recommend you go through some official Docker tutorials and a few other readings on Docker concepts.
Tip 1 – go through docker 101 tutorial for beginners – https://www.docker.com/101-tutorial
Tip 2 – to understand docker commands, use docker help. Just type docker and you will see the list of available commands.
Tip 3 – go through the Dockerfile reference to understand the instructions – https://docs.docker.com/engine/reference/builder/
Hope you guys got the basics.
We missed one important command – docker build, which is used to build new images.
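As a quick preview, building an image from a Dockerfile in the current directory looks like this (the image name and tag below are placeholders):
docker build -t <your-dockerhub-id>/myapp:1.0 .    # builds an image from ./Dockerfile and tags it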