In the previous posts, we’ve seen what Images & Containers are.

This post discusses how we can connect to our containers, share files with them, have them share files with us, and also limit that sharing to read-only permissions.

As we’ve understood so far, Docker is an interface for managing namespaces in our OS.

In addition to local files and configurations, we can also configure network interfaces for the container and allow files to be accessed both from our OS and from the container itself. It’s basically letting the container mount parts of our file system.

Networking of Containers

A container is usually closed off from the outside world when it comes to port access, so if we listen on port 4000 inside the container, we won’t be able to reach it from outside unless we explicitly allow it.

If, for example, we wish to allow access from outside, we need to add one of the following flags to the docker run command:

-p 4000:4000 => Access to port 4000 on the host routes to port 4000 inside the container.

-p 10000-20000:10000-20000 => Access to ports 10000-20000 on the host routes to ports 10000-20000 inside the container.

But that only opens a port on the host and maps it into the container.
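
As a small illustration, something along these lines should work (the node:alpine image and the tiny inline server are just my own picks for the demo):

docker run -d --name port-demo -p 4000:4000 node:alpine \
  node -e "require('http').createServer((req, res) => res.end('hello')).listen(4000)"

curl http://localhost:4000   # answered by the server listening on 4000 inside the container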

But how does the networking itself, at the OS level, work with the containers? The traffic is routed from the OS networking stack into the container’s namespace, which adds some overhead to the data transmission, but just like with Virtual Machines, we have options for choosing the type of networking driver we want to use.

We actually have several types of networking we can configure for our containers, so we’re going to go over some of the basic types out there and understand the differences.

Bridge

This mode is the default mode for creating containers.

When using bridge mode, we’re actually using a virtual network interface, usually called docker0, which you can see with the ifconfig command.

Bridge mode creates an isolated network, by default 172.17.0.0/16, and each container gets an IP of its own. This allows the containers to communicate among themselves but not with the outside world, which is basically our OS.

If we do wish to communicate with the outside world, we simply need to publish ports and that’s it. But something important to understand is that you cannot publish the same host port for multiple containers, because only one of them can listen on that port as far as the OS is concerned.
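
A minimal sketch of how that plays out (the alpine image and the names my-bridge, app1 and app2 are placeholders I picked):

docker network create my-bridge                  # -d bridge is the default driver, so it can be omitted
docker run -dit --name app1 --network my-bridge alpine sh
docker run -dit --name app2 --network my-bridge alpine sh
docker exec app1 ping -c 2 app2                  # containers on the same user-defined bridge reach each other by name

Note that resolving other containers by name works on user-defined bridges; on the default docker0 bridge you would have to use the containers’ IP addresses.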

Host

This option is quite powerful but should be handled with care. It lets a container share the network interface of the OS, and as a result, everything the OS receives, the container can receive without any publish flags. It also means two host-networked containers cannot listen on the same port, just like two regular applications cannot listen on the same port.

Even if you do add publish flags with the host network driver, Docker will tell you that port publishing is not needed, to keep the command itself simple.

But why use it, you ask?
Port mapping adds overhead on the networking side, so host mode lets you skip it and boost performance, which matters for the many use cases that need a wide range of ports.

Also, if you run a container in host mode on a Linux machine and run the “ps aux” command, you will see the applications running inside the container in the output.

An important thing that sometimes gets missed, and I also missed it the first time: when you run a container on a macOS or Windows host, the host network driver won’t work because it isn’t supported there yet.
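
On a Linux host, a quick way to see it in action could look like this (nginx:alpine is simply an image I picked for the demo):

docker run -d --name host-demo --network host nginx:alpine
curl http://localhost:80     # nginx answers on the host's own port 80, no -p flag involved

And just like with two regular applications, if something on the host already occupies port 80, the container’s nginx will fail to bind to it.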

Overlay

Sometimes we wish to use multiple machines that run Docker and connect them all together as a single network. I haven’t experienced this option much, but I think the overall idea is well explained 😉.
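
I haven’t used it much myself, but as far as I know a minimal setup looks roughly like this (overlay networks require swarm mode, and my-overlay is just a name I made up):

docker swarm init                                          # enable swarm mode on this machine
docker network create -d overlay --attachable my-overlay   # --attachable lets standalone containers join it
docker run -dit --name svc1 --network my-overlay alpine sh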

None

This option means that there’s no network defined for the container, and as a result, it cannot be accessed through any port or by another container.

This is a great option for testing applications that act on their own, like CPU-bound applications. It can also be a good option for sandboxing for security reasons, but beyond that, I haven’t heard of other use cases for it so far.
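
A quick way to see what “no network” actually means (alpine is just the image I picked):

docker run --rm --network none alpine ip addr    # only the loopback interface shows up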

Example

But how can we create a network and use it?

It’s quite simple:

docker network create -d [bridge | overlay] my-network

docker run --network [none | bridge | host | self-created-network | container:<containerID or name>]

or

docker run --net=[none | bridge | host | self-created-network | container:<containerID or name>]

The bridge driver is the default one, so there’s no need to specify bridge, either when creating a network or in the network flag when running a container.

If we wish to use the none or host driver, there’s no need to create anything with the docker network create command, unless you want to organize things by name or in some kind of hierarchy, but I’ve never heard of order making a mess, so it’s kind of welcome 😉.

We can, as you have already seen, let a container use the network stack of another container by specifying container:<containerID or name>.
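
As a small sketch of that last option (the image and names here are my own picks), one container can piggyback on another container’s network stack like this:

docker run -d --name web -p 8080:80 nginx:alpine
docker run --rm --network container:web alpine wget -qO- http://localhost:80   # shares web's network namespace, so localhost is nginx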

In the world of Docker networking, there are a lot more subjects to cover, so if you’re intrigued I highly suggest reading the documentation of each network driver, which will lead you to other places to read about.

Happy reading!🧐

Volumes & Shared data

If you’re using directory mounts on Virtual Machines, this chapter will feel very familiar to you, and if you aren’t, you’re in luck 🙃.

When running a container we can specify the following flag:

docker run -dit -v /home/ubuntu/Desktop/my-volume-folder:/container-folder node:alpine

This mounts a folder from our host machine to a folder inside our container. If the folders don’t exist, don’t worry, Docker will create them for you.
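
A small sketch of the flow, using the same folders as above (the paths and names are of course just placeholders):

docker run -dit --name volume-demo -v /home/ubuntu/Desktop/my-volume-folder:/container-folder node:alpine sh
docker exec volume-demo sh -c 'echo hello from the container > /container-folder/note.txt'
cat /home/ubuntu/Desktop/my-volume-folder/note.txt       # the file is visible on the host as well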

Why do we need volumes you ask?

It could be for a variety of reasons, actually:

  • We want to pass data to containers for use.
  • Want the host to have access to data inside a container.
  • Want to keep log files from the container even after it’s been removed.
  • More…

Another cool feature is making a volume read-only for the container, which is a good option when there’s data shared across multiple containers from the same volume.

In order to do that, we simply need to specify the following:

docker run -dit -v /home/ubuntu/Desktop/my-volume-folder:/container-folder:ro node:alpine
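
With :ro in place, any write from inside the container should be rejected; a quick way to verify it (same placeholder paths as above):

docker run --rm -v /home/ubuntu/Desktop/my-volume-folder:/container-folder:ro node:alpine \
  sh -c 'echo test > /container-folder/blocked.txt'      # fails with "Read-only file system"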

In Conclusion

We’ve now seen the options we have for network drivers for our containers, and I hope we understood them well 😇.

We’ve also seen a cool tool called volumes, and believe me, there’s a lot more to them and to the network drivers, but we did manage to see some of the main features and understand their concepts.

If you wish to continue reading then check out the next part of this series.

How to Work with Docker?

If not then again I hope you had a great time reading this piece, and if you have any further questions I would be delighted to answer them.
Also, if you have any opinions or suggestions for improving this piece, I would like to hear 🙂

Thank you all for your time and I wish you a great journey!
