In this Docker series we’re going to understand what Docker actually is, what we can do with it, and which deployment techniques can be used along with it.

If you want, there’s a post I wrote in the past about Docker. It isn’t as detailed as this series, but I do recommend checking it out as a starter.

Docker today is used for applications at every scale, from the smallest to the biggest, and I wanted to make a series about it that helps you understand how to use it well and how it differs from the other options we have today.

Today’s Deployments

Today we usually use physical or virtual machines that are configured with our pre-built scripts and, in the end, run our compiled application.

Provisioning those machines takes time.
It could take 1 minute, it could take 2, and in some cases it could even take 15, depending on the application. As you understand, the longer the machine takes to start, the longer the client waits for their application to continue working as expected.

I wish that were the only challenge. There’s also the challenge of debugging installation scripts to figure out why they take so long, or even cases where we ship the wrong scripts, which in the end leads to more downtime for our applications.

Overall, you can see that the more human involvement a machine deployment needs, the more error-prone it gets. Of course, there are people who are wizards in that area and can do deployments with their eyes closed, but we’re humans and we make mistakes from time to time, so the less human involvement our deployments need, the easier our lives get. 😉

Moreover, in some cases you need applications that depend on a specific configuration or setup, which limits us to running our application in specific environments and, in the end, can also make debugging it harder.

Some of you might say that we can make pre-built images of our machines and that will eliminate the human-involvement errors, and you’re right. The downside is that we probably need to run virtual machines on our local machines to check them out, and from my own experience, virtual machines are very fragile.

I remember installing an Ubuntu Server VM that crashed on me several times before I succeeded in saving it as an image. Moreover, virtual machines take time to boot, and they don’t have the git-like image management that Docker has.

What is Docker?

Docker is a utility for configuring namespaces in our operating system. In the world of containers, each namespace behaves as a container with its own unique features and configuration. You can think of it as a separate entity in your OS that interacts with a variety of the host OS’s components, and you can grant it more permissions to access the host OS.

Other container implementations do basically the same thing, but of course the implementation differs between each one, and each may have more or fewer features.

Namespaces allow us to create logical separation in our OS and show/hide features between namespaces. It means that we can create a namespace that has access to file X, while another namespace doesn’t have access to it. We can give namespaces access to the host OS, access to the host NIC so they can be part of the host network, and many more…
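To make the idea concrete, here’s a minimal sketch using Linux’s unshare utility (assuming a Linux machine with util-linux installed), which exercises the same namespace machinery Docker builds on:

sudo unshare --uts --fork bash => Start a shell inside a new UTS (hostname) namespace
hostname my-isolated-host => Changes the hostname, but only inside this namespace
exit => Back on the host, the original hostname is untouched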

But why?!

Today, many application deployments consist of HTTP API servers, and some don’t.

If we only have an HTTP server, it’s probably a compiled application with a configuration file that we put inside the container, installing the relevant runtime, configuring anything else we need, and that’s it. But even with HTTP APIs, things can get complicated. On cloud providers, for instance, we usually install applications that integrate with the provider’s services, and that’s only one example out of many.

Each of those configurations/installations is saved somewhere, maybe with internal company guides on how to perform them. So imagine not having to go around figuring out how your application’s host machine works, then running it on your cloud provider and trying to understand what goes on there.

Instead, we can download an image that contains everything, run it locally, and debug it more easily and quickly. Those images can even be used for local development. Imagine you want to build an automated test for your application and you want the other applications it talks to available as well, so the test is as real as possible. Wouldn’t that be great?

Hmmm… but you forgot about that application that uses many OS-specific configurations, or those specific applications that aren’t as simple as an HTTP server, and the challenges keep piling up, let’s say 🙂

Instead, you can download all the Docker images, run them, and work as if you were on your cloud provider with all your applications at hand. How cool is that?

What are Images?

In short, an image is a saved container that holds the entire configuration: its own file system, applications, and everything else you can think of an OS having, within the limitations of containers.

So think of taking a prepared image and, with one command, having everything you’ve already configured in the past. How cool is that?

Today we have so many images on Docker Hub that shorten our development time: we can take an existing image and customize it for our needs, or simply run it as it is. If you need CentOS, Ubuntu, or Windows servers, or even a more specific use case like the NATS message broker, all configured for us, then believe me when I say it: Docker Hub might already have that image ready for you.
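You can even check from the terminal whether such an image exists (NATS here is just the example from above):

docker search nats => Searches Docker Hub for images whose name matches “nats”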

Also, many cloud computing platforms like AWS or Google Cloud (GCP) have their own container registry services, so if you wish to store the images you’ve made for your platform, you can save them there under your account.

But how do we use those images?

Well, to understand what we can do with them, let’s focus a little on some basic commands we have in our arsenal.

  • Showing our images (docker images): this shows every Docker image we have on our local machine. It means that for every image we download from Docker Hub, we’ll be able to see it using this command.
  • Pulling an image (docker pull imageName:imageVersion): this command lets us download an image from Docker Hub by specifying the image name along with its version. If you go to Docker Hub and choose any image, say a Node.js image, you can look at the Tags tab to see all the versions available for download (see the example right after this list).
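For instance, pulling the Alpine-based Node.js image and checking that it landed locally (the tag here is just an example; pick any from the Tags tab):

docker pull node:alpine => Downloads the alpine-tagged Node.js image from Docker Hub
docker images => The new image should now appear in the local list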

You might be asking now:

  • How do we run those images?
  • What happens in the background when running those images?
  • So what is a container?
  • How can we create a new image of ours?
  • And more…

I’ve saved the best for last: containers 🥳

Container this & Container that

Containers are the bread and butter of Docker, and now we’re going to understand some of what we can do with them.

A container is actually an image that we’ve run.

But how do we run it?

Simple, we can do one of the following:

  • Download an image using the docker pull command, and then execute it.
  • Execute an image directly, without downloading it first; if the image doesn’t exist locally, docker pull is executed in the background.

The command to run an image is:

docker run -dit --name node-container node:alpine

In my previous post, there’s a detailed explanation of the run command, so I highly suggest you read its last part for this command’s details, because I don’t want to make this post longer than it needs to be 🙃.

But still, in short, this command tells the Docker daemon to run a container named node-container out of the image node with the tag alpine; we separate the image name from the tag/version using ‘:’ (the -dit flags run it detached, keeping stdin open with a terminal attached).

Moreover, each container gets a randomly generated ID, which can be used to reference it in any kind of command we wish to execute, but you can also reference it using the name you’ve given it.
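For example (the ID below is made up; yours will differ):

docker ps => Lists the running containers along with their IDs and names
docker logs 3f4e8a1b2c => Reference a container by its (shortened) ID…
docker logs node-container => …or by the name we gave it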

But then a question arises.
How can we execute something inside the container, or simply get inside it to make any changes we want?
Simple!

docker exec -it node-container sh

What this does is tell Docker to execute the sh shell (Alpine-based images ship sh rather than bash) inside the container we created before, and as a result, we have a shell inside the running container.
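Once inside, you can poke around as you would on any machine (a couple of illustrative commands):

node --version => The runtime baked into the node:alpine image
ls / => The container’s own file system
exit => Leave the shell; the container keeps running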

Creating one of our own

There are two ways to do that.
One is downloading an image, running a container out of it, and committing the changes to create a new image.

Another way is using something called a Dockerfile, which is like a cooking recipe that tells Docker how to create an image.

We’re going to see the first option now and the second later, in part 3 of our series.

So… let’s say that your boss asked you to take a service you’re working on (never mind which runtime for now) and make a Docker image so you can run a container out of it.

Do you need an Ubuntu-based image?
Great, let’s download an Ubuntu image with the tag matching the version we need, using the docker pull command.

Create a container using the docker run command and get inside it using the docker exec command.

Download your application source code using git, install the runtime needed, compile the application if needed, and exit the container.
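Put together, a minimal sketch of that flow might look like this (assuming an Ubuntu base and a Node.js runtime; the repository URL and package choices are illustrative):

docker pull ubuntu:22.04 => Grab the base image
docker run -dit --name node-container ubuntu:22.04 => Start a container from it
docker exec -it node-container bash => Get a shell inside
apt-get update && apt-get install -y git nodejs npm => Inside the container: install git and a Node runtime
git clone https://github.com/my-org/my-service.git /opt/my-service => Inside the container: fetch the (made-up) source repo
exit => Leave the container; it keeps running in the background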

Now we’re going to see another new command for our toolbox.
The command is:

docker commit --author "Captain America" --message "Added application runtime and configuration" node-container my-repo/ubuntu-node-container:0.0.1

This way, we’ve created an image called my-repo/ubuntu-node-container, with version 0.0.1, in our local repository of Docker images.
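We can verify that it landed in our local repository:

docker images => my-repo/ubuntu-node-container should now be listed with the 0.0.1 tag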

Now we can execute it in the following way:

docker run -dit --name my-node-container my-repo/ubuntu-node-container:0.0.1

Why did we have to specify the version this time?
Great question, and the answer is simple.

For every image we try to run/pull without an explicit tag, Docker assumes the tag value “latest”, so if we want something more specific we specify the version; otherwise, the latest tag is run/fetched instead. Since we committed our image with only the 0.0.1 tag, there is no “latest” for Docker to fall back on, which is why we had to spell it out.
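You can see the difference in action (the ubuntu pull is just an illustration):

docker pull ubuntu => Implicitly the same as docker pull ubuntu:latest
docker run -dit my-repo/ubuntu-node-container => Fails: our image has no “latest” tag to fall back on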

How can we execute our application once the container is running?
We can simply run our container and afterward execute a command inside it.

Let’s say we want to execute a Node application inside the container we just started, so we can do the following:

docker exec -it my-node-container node server.js

Now I have an honest disclaimer for you.

The way we just created an image is not a best practice, and it can also lead to a lot of problems in managing images created like this, so I highly suggest you don’t create them this way.

So why did I show you this, you ask?

I wanted us to understand the commands we have, understand the flow of creating an image, and in the end see a better way of creating one, so you’ll buy the product 😉 and use the Dockerfile instead.

More Commands for our Arsenal

docker ps => Shows a list of running containers
docker ps -a => Shows a list of running and stopped containers
docker images => Shows the images in our local Docker image repository
docker stop [name | ID] => Stops a running container (docker container stop is the equivalent long form)
docker start [name | ID] => Starts a stopped container again
docker container rm [name | ID] => Removes a container
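For example, to clean up the containers we created throughout this post:

docker stop node-container my-node-container => Stop both containers in one go
docker rm node-container my-node-container => Then remove them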

In Conclusion

If you’ve reached this point, it means you invested the time to read this post, and I highly appreciate that. I also hope you had a great time reading it and, of course, that it gave you good value.

We’ve covered the basics of Docker images and containers, and how to manage them.

We’ve also seen an overview of why Docker provides quite a toolbox for easing the development and deployment of our cloud applications.

We’ve seen how simple the commands and overall usage are, and we’ve only started to explore the features Docker has, so hold tight and get ready for more exciting features.

If you wish to continue reading then check the next part of this series.

How to Work with Docker?

If not, then again, I hope you had a great time reading this piece, and if you have any further questions, I would be delighted to answer them.
Also, if you have any opinions or suggestions for improving this piece, I would love to hear them 🙂

Thank you all for your time and I wish you a great journey!

Author

I simply love learning and improving through what I do, and by doing so I strive to achieve my goals, which include helping others achieve theirs.
