Today there’s a lot of buzz around Docker, especially about what it’s for and what it does.

It’s actually quite easy to grasp when explained well, and it gives you an amazing set of features once understood.

We’re going to talk briefly about the history of portable environments and virtual machines, and afterwards we’ll see what Docker is, step by step.

As the great Richard Feynman used to say, when you understand something deeply, you can explain it in a way that a four-year-old would understand. Therefore, we’re going to talk about Docker’s inner parts in order to understand it well.

In this article I assume you have some experience with virtual machines, so if you aren’t familiar with them, I highly suggest reading about them first.

I hope you’re ready, let’s go …

History class 101

In the past, when you wanted a machine that was pre-configured and ready when needed, you used a virtual machine.

Virtual machines are all well and good, but a VM is a guest machine on our host, with an OS of its own, which in the end speaks to our host OS through a hypervisor.

I know there are bare-metal (Type 1) and hosted (Type 2) hypervisors. We’ll talk about the hosted hypervisor in this piece, in order to simplify things.

With a hosted hypervisor we can create a virtual LAN, in which the host acts as the virtual LAN’s router. We also have bridged networking, which lets the VM get an IP from the same network the host resides on.

There are more modes the NIC can be configured in, but we’re going to focus on these.

Cloud and beyond

Whether for applications in the cloud or any other use case of VMs, we used them as a pre-built, pre-configured environment we could simply run, so everything works fine without doing any setup or debugging work.

There’s a popular phrase every developer has had the opportunity to say, and it’s “But it worked on my machine?!”.

How frustrating is that? I can really relate, because it has happened to me too many times, and I believe it has happened to you as well.

Today many cloud providers and CI tools use virtual machines in order to deploy our applications for customer use, but there’s a little downside to it all.

A virtual machine is a whole operating system that needs to be installed and then exported as an OVA, which doesn’t take too long (not really, lol).

Because it uses an operating system of its own, the VM has its own boot time, and of course the time-consuming management of installing and configuring it at the start.

The greatest minds wanted to solve those issues and create a community that helps maintain OS environments in a fast, lightweight way, one that is easy to understand and uses today’s standards.

From that, Docker was born.

All hail Docker

Docker is a simple application, called the Docker daemon, that is responsible for creating containers from images, where each container is our application running in an isolated environment of its own.

That’s quite a rock to land on you, but stay with me; everything will become clear in a few minutes.

Like our VMs, Docker has images of pre-built operating systems that can be run. When an image is run, we have actually created a container from that image, which basically means we created an instance of that image.

Let’s say, for example, that one day I woke up and wanted to take the Ubuntu OS and include a Python script that prints an amazing “Hello Ido”, just for the good feeling.

To make that happen, we need to take an Ubuntu image, add our functionality (the amazing banner), and save the result as a new image, so we can create containers out of it.
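As a concrete sketch, that recipe can be written as a Dockerfile (the `hello-ido` image name and the exact base tag below are my own illustrative choices, not something fixed by the example):

```dockerfile
# A minimal sketch, assuming Ubuntu 18.04 as the base (adjust the tag to taste)
FROM ubuntu:18.04

# Install Python inside the image
RUN apt-get update && apt-get install -y python3

# Create the amazing banner script
RUN echo 'print("Hello Ido")' > /hello.py

# Print the banner whenever a container is started from this image
CMD ["python3", "/hello.py"]
```

Saving this as a file named Dockerfile, `docker build -t hello-ido .` bakes it into a new image, and `docker run hello-ido` creates a container out of it that prints the banner.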

If we round things up we get:

  • Docker Daemon — The application we actually operate with, and which we ask to do everything related to Docker.
    Moreover, all communication between the Docker containers themselves, or with the operating system, goes through this entity as well.
  • Image — A saved state of an OS plus whatever was layered on top of it. People publish their Docker images to Docker Hub, which you can download from for your own use.
  • Container — In the simplest terms, it’s an image instance.
    Of course, you can create multiple containers from a single image.

Something cool about Docker’s creators: they made the command line, and the whole methodology of handling Docker, feel like using Git. So if you know Git, give yourself a round of applause for already knowing most of the command syntax, or at least for understanding how to work with it.
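To illustrate, here is a rough analogy of my own (not an official mapping) between everyday Git commands and their Docker counterparts:

```
git pull    <->  docker pull     # fetch from a remote repository / registry
git push    <->  docker push    # publish to GitHub / Docker Hub
git commit  <->  docker commit  # save the current state as a snapshot / image
git tag     <->  docker tag     # label a specific version
```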

The next part is going to talk about user and kernel space, and also hypervisors, so if you aren’t familiar with them I highly suggest reading about them.

But VMs and Docker still sound the same!

Remember the part where VMs have an OS of their own?
What I mean by OS is that they need kernel code of their own, which gives the OS user-space capabilities like networking, applications and more…

When we create a VM, we have a hypervisor bridging between the guest VM’s OS and the original host OS. This, of course, causes a bottleneck, due to the entities along the way. That matters most when your guest OS needs to use the host kernel’s features at really high rates.

Even when you aren’t in the scenario we just talked about, you still need your OS installed, configured, and saved to .ova/.ovf files for future reuse, and those aren’t that easy to maintain.

Docker, on the other hand, gives you an application that imitates an OS without needing the entire boot time of an actual OS, because it uses the host OS.

Moreover, Docker created a public repository, Docker Hub, where everyone, just like with Git repository servers (GitHub, Bitbucket, GitLab, etc.), can save their images for everyone’s use. So Docker basically came with a community of pre-made images for a lot of use cases. This allows faster development and research timelines in a lot of projects, which lets us focus on our core values instead of maintenance or other time-consuming problems.

We want an example!

Let’s say we want a Docker container of Ubuntu. We need to issue the following:

docker pull ubuntu (you can also write docker pull ubuntu:18.04)

With this, we actually downloaded a new Ubuntu image to the Docker application on our local machine (and as you can see, you can specify its version).
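If you want to confirm the image actually arrived, you can ask the daemon to list the images it has stored locally (this only lists; it doesn’t change anything):

```shell
# List the images the local Docker daemon has stored;
# the freshly pulled ubuntu image should appear in the output.
docker images
```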

Now we’re going to run the image, and actually get a container out of it.
It’s going to be a little complex, because I want us to see a very detailed and popular command, so we can understand end to end what it does:

docker run --rm -dit --name MyUbuntuMachineName -p 80:80 -p 443:443 ubuntu:18.04
  • docker run — the command that simply runs a container from an image.
  • --rm — says that when the container stops, we want it automatically removed from the cached containers. Containers are cached so you don’t lose any temp/local data stored inside them, so use this flag wisely.
  • -dit — a set of flags (-d, -i, -t) that runs the container detached in the background, with an interactive terminal attached, without dropping you into it. Execute it yourself in order to understand it deeply.
  • --name — lets us tell the Docker daemon what we want the container’s name to be.
  • -p — when any kind of network data arrives at the host machine, for example on port 80, but is actually meant for the container, it wouldn’t reach it without telling the Docker daemon: “Hey Docker daemon… when data on port 80 is received on the host, send it to me on port 80 as well”. If you’ve understood what happened up to this point, it means you understand -p 443:443 as well 🙂
  • The last value is the name of the image we want to run, and its version (tag).

After we’ve run the container, we can see its details, and those of all other containers, by executing:

docker container ls -a
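And when we’re done, we can stop the container by the name we gave it; because we started it with --rm, stopping it also removes it:

```shell
# Stop the running container; thanks to --rm it is removed automatically
docker stop MyUbuntuMachineName

# Confirm it no longer appears, even among stopped containers
docker container ls -a
```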

Conclusion

Today the entire tech industry is striving to move its on-premise and cloud hierarchies to Docker containers.
This move is good, but sometimes when things go fast, as they need to, it can leave others confused about what that step actually gave in the end result.

Therefore, I wanted to give a brief but detailed review of Docker and the state that existed before it.
Of course, I’m not saying that the VM is a bad solution; it’s good, but you need to decide, for your own use case, what you need in terms of computation time, fast deployment, research, future hierarchy changes and more…

As always, I do hope you enjoyed reading this piece, and if you have any suggestions to improve this article so everyone else will enjoy it and have a better experience, I’ll be very glad to hear them 🙂

Thank you again and have a great day!

Author

I simply love learning and improving through what I do, and by doing so I strive to achieve my goals, one of which is helping others achieve theirs.
