
Docker

Containers, containers – up to the horizon. Not so long ago (a few years back) I was enjoying the possibilities and lightness of LXC (I wrote about it, unfortunately only in Polish), and honestly, in discussions with colleagues I tended to ignore Docker. After all, LXC is quite a nice tool, what more could I want?... But of course I could! Although I am personally unwilling to frame it as "either Docker or LXC" (in my opinion it depends mostly on the type of task), in a developer's day-to-day work Docker seems to do better – LXC is, well, "too heavy". It is worth mentioning here that with Docker the difference lies mostly in the approach. Everything I had come across before Docker (VMware, ESXi, Xen, Hyper-V, VirtualPC, Qemu, Bochs, VirtualBox and of course LXC) is some form of virtualization (paravirtualization or containerization) of the WHOLE system. Docker focuses on the application, without caring about those parts of the system which the application itself does not care about. In the language of Linux gurus: the app is started with process identifier (PID) equal to "1" (normally reserved for the "Init" process, but in a Docker container there is no Init process nor any counterpart of it) – does that change anything? It seems it changes a lot!

Concept

Let me warn you at the start that comparisons between LXC and Docker will come up many times in this article – partly because of my own experience (I know LXC, I worked as an administrator for some time and I understand the approach LXC represents very well), and partly because it simply makes great material for comparison – both projects use much the same technology as their "execution environment" (even though Docker has since moved away from being based on LXC and for some time has been using its own library, libcontainer). For those who do not know LXC, let us assume that LXC is the "classic" approach – a container treated like a virtual machine; that should be enough (I will not get into technical specifics).

Let's go back to the process identifier (PID) – as I mentioned in the intro, Docker starts the indicated application with PID equal to "1". It turns out this can sometimes be problematic. The problem may lie in the application itself (example), or it may result from the way a process with PID=1 is treated by Linux (I mean only the kernel of the GNU/Linux system) – a more detailed description of the issue and a proposed solution can be found on the website of the dumb-init project. These look like annoyances rather than fundamental obstacles – technically, despite some friction, there are no major problems (and it can only get better!). Conceptually, however, the matter of PID = 1 is a bit more involved. Although I have never analyzed the Init process in detail (those interested will find plenty of information on the internet), I believe it performs many useful functions. Out of curiosity I decided to check what the processes in a Debian GNU/Linux system look like when, at boot, we skip starting Init and jump straight into the system shell. It is a simple trick which can be very useful if you forget the root password, because a system started this way gives you root privileges without asking for a password – after all, we start the system shell directly! The standard procedure is: the kernel starts Init, Init starts getty (which prints the "login prompt"), after typing the user name the login process is started (it reads the password and authenticates the user), and finally, once the password has been verified, the shell process is started (e.g. /bin/bash) – a long way! By passing the right parameters to the kernel at boot we can skip this whole sequence and start the shell process – in this case /bin/bash – immediately (it should work as long as Linux (the kernel) was not compiled with options blocking such a startup; the default kernel in Debian Jessie is not).
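For illustration, skipping Init comes down to a single kernel parameter, init=/bin/bash. In GRUB it is enough to press "e" on the boot entry and append the parameter to the line starting with "linux" (the kernel image and root device below are only placeholders, your entry will differ):

linux /boot/vmlinuz-3.16.0-4-amd64 root=/dev/sda1 rw init=/bin/bash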

What does the process tree look like in a system started this way? Well, the "bash" process does indeed run with PID = 1! (In a live system the list of processes will not be limited to our shell, though – there will be quite a sizeable process tree, in my case around 60 processes descending from kthreadd, which is started with PID = 2 – but these are "technical details".) The effect: on the one hand, substituting the shell for Init lets us do whatever we want with administrator privileges; on the other hand, our system, at least initially, is almost completely non-functional. In a system started the standard way there are about 180 processes running right away (including, of course, those roughly 60 kthreadd children) – and although it does not seem to me that all of them are necessary, a good part of them is quite useful and makes the system work the way we want it to (this concerns mostly network configuration, mounting external drives, etc.).

The system Docker starts inside a container corresponds mostly to the first case – a GNU/Linux system started with the "bash" process (or some other application) in place of Init. LXC represents the second approach.

It is time for the first of the Docker rules: one container – one application.

To sum up: we do not play with Init and everything connected to it (if we need containers with an Init process, we use LXC) – a Docker container is, as a rule, dedicated to a particular application, even if it contains a complete GNU/Linux file system. Do not let this "complete" file system scare you – thanks to the use of "layers", the storage cost is strongly minimized. LXC requires a full file system per container, so for about 10 containers with a 600 MB system we need about 6 GB of space; Docker uses the layer mechanism, which lets all the containers "share" the base file system and limits the per-container cost to the differences only, so those 10 containers can start while using less than 1 GB of space – a really nice feature (I will come back to layers later).

Of course, this "one application" is not a hard requirement or limitation, only a rule – meaning that if we want to, we can run more processes inside a Docker container, or even start the Init process (although that does not seem sensible – I will come back to it later). One of the options, if we need to run more than one application in a Docker container, is to use the supervisord application.
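If you go down the supervisord route, the idea is roughly as follows (just a sketch, the program names and paths are illustrative and assume a Debian-based image): supervisord is the only process Docker starts, and it in turn starts and watches the actual services. A minimal supervisord.conf:

; sketch of supervisord.conf
[supervisord]
nodaemon=true

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

[program:cron]
command=/usr/sbin/cron -f

and in the Dockerfile the container's command becomes:

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]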

One last thing on the "concept" side – notice that an LXC container can be started without a managing process: the command "lxc-start -n CONTAINER" does not require any daemon running in the host system, LXC containers are "self-sufficient". A Docker container will not start without the dockerd process present in the system (some time ago LXD also supported running Docker containers, but I do not know whether that still works). I am not going to analyze the technical reasons for this design here; I assume it is one of the consequences of giving up the Init process – which is, admittedly, a big simplification. Starting Init inside a Docker container could duplicate some of the functions already provided by the "assisting" dockerd, which in turn could lead to problems. But that is only a theory – I have not tried such tricks myself and I am not sure whether this would be the only, or even a serious, problem. For the time being I stick to the rule: one container – one application (although I do also use supervisord).

Building

We already know the concept; it is time to see how building an environment looks. In the previous article (available, at least for now, only in Polish) I described what environment building looks like with the Vagrant+VirtualBox duo – with Docker the approach is different, mainly because of the "one container – one application" rule. When building an environment with Docker we can aim for much stronger isolation of processes! The technical properties of the solution make this feasible: the cost of building an isolated environment (a container) is much lower than the cost of building a VirtualBox virtual machine (how many of those could we run simultaneously on a local system?), and even lower than building an LXC container (e.g. because of LXC's greater appetite for storage space). Inevitably, though, the tooling we have to use is also different. Before I go into details, let me briefly mention that it is possible to run Docker inside VirtualBox – it can even make sense, e.g. for local testing of Docker Swarm.

Tools. Ansible (and similar tools) works fine with the Vagrant+VirtualBox duo and with LXC (or any other virtualization/containerization system that provides access over ssh). Unfortunately, with Docker this does not work – here ssh is a different story (sshd would be a separate process, after all!). We therefore have to turn to dedicated solutions, such as docker-compose.

Our development environment is mostly a set of cooperating processes (a database, an http server, some caches, etc.) – how do we containerize this whole mess? Naturally, each process goes into a separate container. The number of containers needed can therefore grow quite large. In this article I will not work from a particular example, but soon I intend to share my experience of migrating one of my private projects from two VirtualBox machines to five Docker containers (so far). For now I want to draw your attention to the pattern – with Docker we do not build one machine that is our development environment; we build an environment of cooperating components, even for simple applications (e.g. an application that needs apache+php+mysql already means two containers – one for apache+php and one for mysql – rather than one virtual machine). With every next step, every added application, this small difference turns into a wide gap – but that is fine, because it brings us closer to one of the recently popular trends in IT: microservices!
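To make the "two containers instead of one machine" idea more concrete, here is a minimal docker-compose.yml sketch for the apache+php+mysql example above (the image tags, ports and password are purely illustrative; docker-compose itself comes up again in the summary):

# sketch of docker-compose.yml, illustrative values only
version: "2"
services:
  web:
    image: php:5.6-apache
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example

A single "docker-compose up" then brings up the whole two-container environment.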

We can therefore say that in the Docker architecture an application is a whole team of cooperating components which form a small computer network on our local machine – each container needs its own ip address, that address has to come from some pool, the pool defines a network, and the network needs its own gateway, dns servers, and so on. Most of the time we will simply place all the containers in one network, but nothing stops us from creating the most elaborate network setups (if we want, say, to simulate the work of BGP routers). Such stricter isolation of the individual elements brings some challenges, mostly because it forces us to state responsibilities and dependencies more precisely. At times this can be troublesome, but in the end it is very convenient – replacing any single element (at the container level) becomes a trivial operation!

As I have already written, with Docker the networking question is no longer about a single ip address, as it was with e.g. VirtualBox (I am generalizing, of course). Here we have to look at it more globally and decide on a whole range of ip addresses that we are going to use. Networking in Docker is therefore an important topic.

Currently, Docker offers two approaches to networking:

  • Legacy – the predefined bridge, none and host networks – they can still be handy for tests and ad-hoc setups; in other cases they should be avoided.
  • User-defined networks – networks defined by the user – these should be our main tool (see the example below)!
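Creating a user-defined network is a single command; a sketch (the network name and addressing are of course just an example):

$ docker network create --driver bridge --subnet 10.10.10.0/24 my_net
$ docker network ls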

I think I will come back to the topic of Docker networking in another article; for now I recommend reading the documentation. With the Docker implementation for OS X things get more complicated because of the specifics of that implementation and of the Hyperkit hypervisor it uses. Here I will only say that my way of getting "free" access to the containers on OS X is to use an OpenVPN container (I will write a bit more about the differences between Docker on GNU/Linux and on OS X at the end, in the summary).

Once the network layer is prepared, it is time to build containers! Generally speaking, building a container is a two-phase affair:

  1. Building or downloading an image.
  2. Configuring and starting the container.

The basis of our containers are "images". Images can be downloaded e.g. from the dockerhub.com service or a local repository, or we can build them ourselves. What is an image? It is simply the container's file system, except that some aspects of a working system are "filled in" only at the moment the container starts (e.g. the network configuration), so while building an image we should avoid modifying files related to those functions and focus instead on what matters for the application that will run in the container. An image built with a Dockerfile is always built on top of some other image. So when building the file system for our application we always choose a base image, and then in the image's configuration file (the Dockerfile) we define the modifications we want to apply to that base in order to get the desired result. These modifications include, for example (a minimal Dockerfile illustrating them is sketched below the list):

  1. Copying files into the file system of the container.
  2. Installing packages.
  3. Introducing changes in the configuration files.
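Such a Dockerfile could look more or less like this (the base image, the package and the site configuration file are just placeholders for illustration):

FROM debian:jessie

# 2. installing packages
RUN apt-get update && apt-get install -y apache2

# 1. copying files into the container's file system (my-site.conf is an example config)
COPY my-site.conf /etc/apache2/sites-available/my-site.conf

# 3. changes in the configuration
RUN a2ensite my-site && a2dissite 000-default

CMD ["apache2ctl", "-D", "FOREGROUND"]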

It may seem that this is enough to solve any problem, but sooner or later we notice a certain... inconvenience. All of these operations concern only the file system; we cannot, for example, create database user accounts or restore a database from a backup this way while preparing an image for a container with a MySQL server. What I mean is that commands like:

RUN mysqladmin -u root -proot create my_db

or

RUN mysql -u root -proot < create_my_db.sql

will not work, because... the MySQL service has not been started yet! It will be started only when the container starts, and that happens only after the image has been built – are we chasing our own tail here? It can be a little awkward, but of course we are not helpless in such situations – there are many ways to handle it, they just revolve around actions taken at container start-up rather than during the image build. It is worth keeping in mind!
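As an example of such a start-time solution: as far as I know, the official mysql image executes any *.sql (and *.sh) files it finds in /docker-entrypoint-initdb.d the first time the container starts with an empty data directory, so instead of the RUN lines above it is usually enough to copy the script there while building the image:

FROM mysql:5.7
COPY create_my_db.sql /docker-entrypoint-initdb.d/

The database is then created by the container itself, at startup, exactly in the spirit described above.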

Each "image" is built on top of another image, and the number of intermediate images is arbitrary (at least in theory). This means that not every image has to be built with a particular container in mind – some images may exist just to isolate certain operations. The dependency between Docker images resembles inheritance in object-oriented programming; un/fortunately, multiple inheritance is not possible – a given image may have only one base image.

Images are also tied to another interesting piece of Docker functionality – layers. Each image is built out of layers, which are created automatically as the commands from the Dockerfile are executed.

The layers may be "previewed" using the command:

$ docker history IMAGE

They can also be merged, reducing their number in more complex images. Layers are a very... interesting idea. Support for layered file systems has existed in Linux (the kernel of the GNU/Linux system) for a long time, but it has never been exposed to the user quite like this (at the moment I cannot think of another project using this mechanism). The idea is simple – each upper layer contains only the differences (in terms of the file system) relative to the layer below it. Layers are a kind of "third dimension" of configuration (well, I still need to think this comparison through); by organizing them sensibly we can greatly speed up the building of images. One rule applies, though: if we modify a layer, all the layers above it have to be "rebuilt" as well. Layers are a bit like the mechanism known from graphics software, where the background sits on the lowest layer and details are painted on the layers above – it generally works just like that.
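A common way of keeping the number of layers down is to chain shell commands into a single RUN instruction, since each RUN line in a Dockerfile produces its own layer; a sketch:

# three RUN lines, three layers:
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# one RUN line, one layer:
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*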

The last step in creating an image is running the following command (note the dot at the end – it indicates that the Dockerfile is in the current directory; we may also replace the dot with the path to the directory containing the Dockerfile):

$ docker build -t IMAGE .

If everything went fine, our image should now be visible in the output of the command:

$ docker images

This list contains, apart from the images we have built locally, also images downloaded from a remote repository (e.g. the dockerhub.com service); such a download happens automatically during a local build whenever the FROM line of a Dockerfile refers to an image that is not yet available locally. We can also download an image from a remote repository ourselves with the command:

$ docker pull IMAGE

For example, in order to download the official image of the MySQL server:

$ docker pull mysql

An image can be deleted from the "images" list with the command:

$ docker rmi IMAGE

Containers

Starting a container requires, among other parameters, the name of one of the images on our image list (although there is also the possibility of an automatic attempt to download the image if it is not present on our local drive). The Docker documentation recommends that containers be stateless – so that deleting a container does not mean losing any data or information – which definitely helps keep things in order.

Let's see what starting an example container with the "run" command looks like:

$ docker run --rm IMAGE

The "--rm" parameter makes Docker delete the container as soon as it finishes its work. Before we look at this more closely, let's see how to "preview" the containers that exist or are running in our system.

 
$ docker ps

It shows the list of all currently running containers.

$ docker ps -a

It shows the list of all containers existing in our system (including those that were created or stopped but not yet deleted). If we check the container list ("ps") after starting our container, it should be visible there. We can stop a container with, for example:

$ docker stop CONTAINER

(CONTAINER can be found in the output of the "ps" command – by default, if we do not provide a name when creating a container, it gets a combination of a random adjective and some well-known surname). Of course, a container started with the "--rm" parameter will not show up on the "ps -a" list after it finishes – it is deleted immediately.

The command "run" combines the functions of two different commands:

1. Creating a container:

$ docker create IMAGE

2. Starting a container:

$ docker start CONTAINER

The "start" command can also be used to start a previously stopped container again, as long as it has not been deleted (which is why containers do not absolutely have to be stateless ;)).

If we would like to delete a container "manually", we can do that with one command:

$ docker rm CONTAINER

(the container must be stopped first!)

All of the examples above concern container management and use the default network – the legacy "bridge"; that network is always used unless we specify otherwise.
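Attaching a container to a user-defined network (such as the my_net created in the networking example earlier) is just one extra parameter, for instance:

$ docker run --rm --network my_net IMAGE

(in older Docker versions the flag was called --net; an already running container can also be attached with "docker network connect").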

The list of the possible options/parameters for the commands "create" and "run" is quite long – I recommend consulting the manual of those commands!

And a word about layers again, this time in the context of containers – when a container starts, Docker adds one more layer on top of the layers defined in the image; this layer holds all the modifications made to the file system inside that container and is, of course, destroyed together with the container. This is probably the main reason for stressing container statelessness – it is the image that should contain all the data, because the image is what gets saved in the repository; nothing remains of the data "generated" inside the container itself. If we do not take care of the statelessness of our containers, part of their configuration will live in the image files and part in the containers – a dualism that can become tiring in the long run. Of course, there are many kinds of temporary data for which the container "layer" is a perfect place, so the "--rm" option is not always the ideal choice. There is also the possibility of using volumes for data that must "survive" the destruction of a container (so the data of database servers, directory services, etc. may – and should! – be stored not in the container's file system but on a volume, external to the container itself, which is handed to the container at startup with the "-v" parameter).
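A quick sketch of the "-v" idea for the MySQL case (the volume name and password are arbitrary): a named volume survives the removal of the container and can simply be attached to a new one.

$ docker volume create mysql-data
$ docker run -d --name db -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=example mysql
$ docker rm -f db
$ docker run -d --name db -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=example mysql

After the second "run" the databases created in the first container are still there, because they live on the volume, not in the container layer.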

And that is probably all about building and starting containers – of course this is only the basic information; there are far more options and possibilities!

Summary

For obvious reasons I could not cover all Docker-related topics in this short article – I did not write about, for example, docker-compose, which makes managing a larger number of containers much easier and frees us from having to remember and type long commands for building images or starting particular instances of them (it can also manage networks and volumes). It is an important tool, and using it even in small projects makes the configuration much clearer. I strongly recommend getting familiar with it. I use docker-compose in my own projects and I will certainly write about it in the future!

Volumes are another topic I have only touched on (apart from the short note while describing container startup). For local projects basic knowledge is enough in this area; I myself have not tried anything more advanced than using a local directory – the topic remains to be explored, also on my part.

Somewhere in the article I mentioned the differences between the implementation of the Docker stack on GNU/Linux and on OS X (I have not had the opportunity to study the Windows version). Because Docker needs Linux (the kernel of the GNU/Linux system) to function, the OS X implementation uses a trick – Docker (together with Linux) is started in a virtual environment managed by the Hyperkit tool – and this creates certain differences compared with the GNU/Linux counterpart. For me the most "visible" difference is the stronger isolation of the Docker network in the OS X environment. On GNU/Linux the containers can be reached directly from the host system using the container's ip address (if our container got the ip 10.10.10.1, then "ping 10.10.10.1" run from the host will work) – on OS X, Hyperkit isolates the Docker network so effectively that direct host->container communication is not possible (this is not much of a problem, though, since such direct communication is rarely needed – services hosted in containers can be reached through the local interface, which is configured with the "-p" option when creating a container). To be able to work on OS X the way I do on GNU/Linux, I personally use OpenVPN (a container with the server started inside the Docker network) and Tunnelblick as the client on the local system – this may seem a bit complicated (in reality it is complicated, but I have not found a simpler solution so far). As a curiosity I add this link to an article whose author points out that this isolation is a matter of Hyperkit configuration and advises changing it, but that is probably even more complicated than using OpenVPN ;)
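For completeness, the "-p" option mentioned above publishes a container port on the local interface and works the same way on OS X and GNU/Linux; for example, exposing a container's port 80 on local port 8080 (the ports are just an example):

$ docker run --rm -p 8080:80 IMAGE

after which http://localhost:8080 reaches the service running in the container.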

Another interesting aspect of Docker is the situation where we want to run more than one Docker host as part of a Docker "cluster". I have the opportunity to experiment with such a configuration at work – I will surely share my experience and observations once we implement Docker Swarm mode. For now, on the multi-host front, I am experimenting with Docker's implementation of the VXLAN protocol. The aim of this exercise is to prepare a distributed Docker environment joined into one network by VXLAN (for creating and managing the hosts we use another interesting technology: OpenStack). In the prepared environment all the containers, regardless of the host on which they are started, will work in one internal network (which is exactly what VXLAN is supposed to provide) – I will certainly describe this in more detail, but for now there is still more testing to do before we are fully satisfied with the result.

Containers, containers up to the horizon! Last autumn (at the Linux Autumn conference) I had the opportunity to take part in a training session where we learned to use the functionality of Linux (again, I mean only the kernel of the GNU/Linux system) to create containers without any "external" software like LXC or Docker. It turns out it is possible – everything is really contained in the kernel (in Linux), even the layered file system! It was a really interesting training and I hope I will find the time to reconstruct those examples of building containers by hand and share them with you here on this blog; for now, at the risk of using empty words, I can only repeat – everything is in Linux! Docker is an idea realized by means of mechanisms contained in the kernel of the GNU/Linux system, and even though the Docker developers have done a good job, we will surely soon be bombarded with other implementations of the container idea (Rocket has already picked up speed, and the CoreOS team intends to counter Docker's Enterprise moves).

At the very end I would like to mention one person, because, to my surprise, I discovered that Ian Murdock was at one time involved in the Docker project. Unfortunately not for long – only about two months, until his suicide in December 2015. Earlier, at Sun Microsystems, Murdock led the Indiana project, whose product was (and is) OpenSolaris, and before Sun he was the CTO of the Linux Foundation – I did not know any of this about him; well, I had never checked what Wikipedia has to say. I personally knew of Murdock only as the founder of the Debian distribution (it is from his name – Ian – that the last three letters of the distribution's name come; the first part, "Deb", comes from the name Debra). I feel very sorry, very sorry indeed, about Mr. Murdock.

And I think that is all – it turned out quite long, but I hope it is not boring (nor too chaotic) – Docker is an endless topic!