
Linux Namespaces

My first encounter with containers on Linux was heavily "burdened" by my previous experience with virtualization. Containers were supposed to be the next step after paravirtualization towards making virtualization even "lighter" and less resource-hungry! Or so I thought. While my colleagues experimented with the vagrant + virtualbox duo, I experimented with Lxc, but I was not especially interested in how Lxc works – for me it was just "a virtual environment". A moment of "enlightenment" came during "Linux Autumn 2016", where I took part in a workshop on Linux Namespaces – it was really interesting! After the conference, however, I put the subject aside. Sometime in 2017 I was looking through the recordings from DockerCon 2015 and found a talk called: Cgroups, namespaces, and beyond: what are containers made from? – it reminded me of "Linux Autumn" and of one of my "post-autumn" resolutions: to take a closer look at Namespaces!

Linux Namespaces (NS) are a product of the 21st century. The first namespace appeared around 2001 or 2003 (Wikipedia probably knows exactly when :)), and all the currently defined NS have been in Linux since version 3.8 (so for a while now). I will not copy the information from Wikipedia here (link above – I recommend that page); for me, Namespaces are above all the mechanism with which the idea of containers is implemented in GNU/Linux systems.

How do tools such as Lxc or Docker look in this context? If we look more closely at how they operate, it turns out that we can, to some extent, regard them as high-level tools that simplify managing NS by defining structures called containers. Of course, this is a rather simplified picture, but it is true :) After discussing NS I will show you how to "connect" to an Lxc or Docker container without using the Lxc/Docker client, using only the userspace tools for managing NS. At the end of this article I will come back to this (simplified) view of Lxc and Docker (as high-level tools for managing NS); for now let me mention a few things:

  1. I am a Docker enthusiast! The view of containers presented here in the context of NS does not come from a wish to show that things can be done differently – absolutely not! It is simply an opportunity to talk about the low-level mechanisms; nobody (least of all me) wants to give up the high-level tools! (the low-level mechanisms mean a lot of problems!)
  2. Namespaces are mechanisms implemented in Linux, and "containers" are an idea realized (in Linux) on top of NS – which is why the Docker versions known from GNU/Linux systems run inside virtual machines (VirtualBox, Hyper-V, or HyperKit) on Windows or MacOS – they have to, because Docker and Lxc need Linux to function ("compiling" Docker for another system would not work by itself!)
  3. In connection with point 2, our containers are not (in theory) tied to a particular vendor of the high-level solutions (Docker, Lxc) – over time there will be more implementations and interesting projects around managing containers understood as groups of namespaces – see e.g. rkt – which is why this technology seems to have a bright future :) (I will describe one, in my opinion interesting, futuristic scenario in the summary)

OK, let’s go straight to the point!

The current NS list contains seven entries. Below I will discuss four of them in detail and only mention the other three (cgroup, user, ipc). The manual page (man) lists, for each NS, the configuration option responsible for enabling that NS in the compiled Linux kernel. This means that NS (all or some of them) will not be available even in Linux > 3.8 if they were not deliberately enabled during compilation – although in the default distribution kernels they mostly are (at least in distributions such as Debian or Centos; it is worth noting, though, that in Centos the default distribution kernel has six, not seven, NS available). It is easy to check which NS were enabled (at compilation time) – I will show how in a moment; here let me just add that once compiled in, a given NS does not have to be "activated", "turned on" or "started" in any way – every compiled-in namespace is active from the moment Linux boots.

What does that mean in practice? NS are tied to processes; there is no NS in which no process "runs" (or to which no process is assigned). Imagine a tree of processes (similar to what you see after the command "ps axf"): the process with ID = 1 (init/systemd) is started first and – most importantly – it is started with an id assigned for each compiled-in NS. Those identifiers can be checked in a few ways; let's begin with the most "basic" one – the information in the "/proc" directory. Without going into the details of "/proc" (that information can be found on the Internet), what matters here is that every process in the system has a subdirectory under /proc, and among the various files and folders in that process subdirectory there is one named "ns", which – as the name suggests – contains the NS ids assigned to the process. So the directory "/proc/1/ns" holds the NS ids with which the process with PID = 1 was started (and because the process with PID = 1 is the first process started in the system, the NS assigned to it can be treated as the "defaults"):

$ sudo ls -l /proc/1/ns 
total 0 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 cgroup -> cgroup:[4026531835] 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 ipc -> ipc:[4026531839] 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 mnt -> mnt:[4026531840] 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 net -> net:[4026531957] 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 pid -> pid:[4026531836] 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 user -> user:[4026531837] 
lrwxrwxrwx 1 root root 0 Aug  4 18:18 uts -> uts:[4026531838] 

Each process is connected with exactly one NS of a given type (above you can see that in this system all 7 NS have been "compiled in"). It is not possible for a process to be connected to two NS of the same type. NS (their identifiers) are inherited from the parent process, unless when starting a process we explicitly indicate that we want to create / use a new NS for that particular process.
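This inheritance is easy to observe without any extra tools – a minimal sketch (assuming a Linux /proc; the UTS namespace is used here, but any type behaves the same) comparing the namespace id of a shell with that of a child process it starts:

```shell
# The UTS namespace id of the current shell...
parent=$(readlink /proc/$$/ns/uts)
# ...and of a freshly started child ($$ expands to the child's own PID there).
child=$(sh -c 'readlink /proc/$$/ns/uts')
echo "parent: $parent"
echo "child : $child"
# No new NS was requested, so the child simply inherited its parent's id.
[ "$parent" = "$child" ] && echo "same UTS namespace"
```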

The examples below should make this much clearer.

One more note: the name Namespace refers both to a particular NS (a particular identifier – and there can be plenty of those, since many NS can be defined within each of the 7 types) and to the "type" itself (of which there are currently 7) – this does not make things simpler, so we have to rely on context.

How do we check which NS we have at our disposal? Checking the "ns" directory of any process is of course one way, but there is also quite a helpful command:

$ lsns

Lsns displays the list of NS the current process works with (i.e. the shell process from which we run the command) – because, as I wrote, each process has an identifier assigned for each of the available NS, the list shows exactly which NS we have at our disposal (so at most 7 different NS, when all of them were enabled during compilation):

$ lsns 
        NS TYPE   NPROCS   PID USER   COMMAND 
4026531835 cgroup      4   421 lukasz /lib/systemd/systemd --user 
4026531836 pid         4   421 lukasz /lib/systemd/systemd --user 
4026531837 user        4   421 lukasz /lib/systemd/systemd --user 
4026531838 uts         4   421 lukasz /lib/systemd/systemd --user 
4026531839 ipc         4   421 lukasz /lib/systemd/systemd --user 
4026531840 mnt         4   421 lukasz /lib/systemd/systemd --user 
4026531957 net         4   421 lukasz /lib/systemd/systemd --user 

In the "COMMAND" column we can see the command for which a given NS was created (and thus from which we inherit it, not necessarily directly). I should make it clear that our current permissions influence the result of the lsns command: in the example above the namespaces were in fact created for the process with pid 1, but our unprivileged user cannot see that, so lsns attributes them to the first process it can inspect (pid 421). Nevertheless, if we are able to execute the command with administrator permissions, we can quickly verify this; let's begin by printing the PID of the shell process we are currently working in:

$ echo $$ 

In bash/zsh the command "echo $$" returns the pid of the current shell process. Now we can execute (substituting our PID for 1234):
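What lsns prints can also be assembled by hand from "/proc" – a small sketch that lists every namespace id assigned to the current shell:

```shell
# Each entry in /proc/<pid>/ns is a symlink whose target encodes
# the namespace type and its id (an inode number in brackets).
for ns in /proc/$$/ns/*; do
  printf '%-6s -> %s\n' "${ns##*/}" "$(readlink "$ns")"
done
```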

$ sudo lsns -p 1234 
        NS TYPE   NPROCS PID USER COMMAND 
4026531835 cgroup    233   1 root /sbin/init 
4026531836 pid       230   1 root /sbin/init 
4026531837 user      233   1 root /sbin/init 
4026531838 uts       230   1 root /sbin/init 
4026531839 ipc       230   1 root /sbin/init 
4026531840 mnt       222   1 root /sbin/init 
4026531957 net       229   1 root /sbin/init 

Using administrator permissions we received more detailed information about the process from which we inherit our namespaces.

"sudo lsns" on its own displays all the currently active NS; the difference from the previous command (run without administrator permissions) is that the list also includes NS defined in the context of other processes (if there are any), e.g.:

$ sudo lsns 
        NS TYPE   NPROCS   PID USER             COMMAND 
4026531835 cgroup     72     1 root             /sbin/init 
4026531836 pid        72     1 root             /sbin/init 
4026531837 user       72     1 root             /sbin/init 
4026531838 uts        72     1 root             /sbin/init 
4026531839 ipc        72     1 root             /sbin/init 
4026531840 mnt        69     1 root             /sbin/init 
4026531857 mnt         1    13 root             kdevtmpfs 
4026531957 net        72     1 root             /sbin/init 
4026532108 mnt         1   203 root             /lib/systemd/systemd-udevd 
4026532156 mnt         1   317 systemd-timesync /lib/systemd/systemd-timesyncd 

Above we can see that in this Debian GNU/Linux system the process kdevtmpfs (pid 13) and the processes with pid 203 and 317 define their "own" "mnt" NS. Checking e.g. process 317, we see that apart from the "mnt" NS (defined specifically for it) this process uses the "default" NS (defined for the process with pid 1):

$ sudo lsns -p 317 
        NS TYPE   NPROCS   PID USER             COMMAND 
4026531835 cgroup     72     1 root             /sbin/init 
4026531836 pid        72     1 root             /sbin/init 
4026531837 user       72     1 root             /sbin/init 
4026531838 uts        72     1 root             /sbin/init 
4026531839 ipc        72     1 root             /sbin/init 
4026531957 net        72     1 root             /sbin/init 
4026532156 mnt         1   317 systemd-timesync /lib/systemd/systemd-timesyncd 

Let’s look at each NS a bit closer :)

Namespace Mount (mnt)

For a short presentation of how this NS works I will use the command "chroot" – "chroot" itself has nothing to do with namespaces (at least nothing I know of), but I hope that a short example of using chroot with and without the "mnt" namespace will show what the namespace itself can do.

The first command starts a shell process with the directory "/directory" set as the root of its file system (in the context of this new shell process only):

$ chroot /directory

The above will work, but for it to make sense "/directory" must (for example) contain a basic file system with commands such as "ps" (and others) that we will use. How do we prepare such a file system in a directory? In Debian we can use debootstrap; in other distributions you have to build it yourself, or skip "chroot" and improvise a bit.

When the file system is ready and we already have executed the chroot command, we can continue:

(chroot)$ ps ax

It turns out that the above will not work, so we must first mount "/proc":

(chroot)$ mount -t proc proc /proc

Now "ps" works; however, when we have finished playing with chroot:

(chroot)$ exit

It turns out that we also have to clean up, because "proc" is still mounted at "/directory/proc" (we can check this by running the command "mount" without any parameters, or just by looking at the contents of /directory/proc):

$ sudo umount /directory/proc

And now let’s try to make it simpler by using "mnt" namespace:

$ sudo unshare -m chroot /directory

Unshare is a command that lets us create a new NS (we need administrator permissions, hence "sudo"). As its argument, "unshare" takes a command to be started in the freshly created namespace(s) (similar to "chroot", but in a different context). Let me repeat: each process has an id assigned for every namespace type defined in the system – here all of them except the "mnt" NS will be inherited from the parent process, while the "mnt" NS will be created specifically for the new process and, for now, used only by it.

After a standard:

(unshare)$ mount -t proc proc /proc

We can check a few things:

  1. lsns will show us a new mnt namespace assigned to the process of our new shell (we can check it via PID – "echo $$" will display the process identifier – "pid" – of the shell)
  2. "/proc" from within the chrooted environment will not be visible in the host system ("mount" or "ls /directory/proc" will show us nothing – our chrooted "/proc" is mounted in another namespace, isolated from the "default" "mnt" NS)
  3. Ending the chrooted shell process will delete the newly created NS (if no other process is using it) and automatically unmount the "/proc" file system mounted "inside" the chroot

Before we finish experimenting with chroot and NS (point 3 above), I would like to present the third and last command that helps us work with namespaces – nsenter – it lets us "connect" to any NS (or group of NS). Let's assume we are in the same situation as above, after executing the commands:

$ sudo unshare -m chroot /directory 
(unshare)$ mount -t proc proc /proc 
(unshare)$ echo $$ 

Now, in another terminal we execute the command:

$ sudo lsns 
4026532373 mnt         1 1234 root             /bin/bash -i 

Let’s assume that we would like to "connect" to this "mnt" namespace with another shell process – for that purpose we could use the command nsenter.

$ sudo nsenter -m -t 1234

And that’s it :) The shell process that will be started after the command will use the same "mnt" NS as the process 1234 – we can check it by, for instance, looking at the mounting points:

(nsenter)$ mount 
proc on /.../directory/proc type proc (rw,relatime) 

In this particular case (after executing the "nsenter" command above) we are connected to the NS but we are not working in the chrooted environment – this obviously has consequences – I recommend experimenting with both environments: the one created by "unshare + chroot" and the one created by "nsenter"!
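To see which processes share a given "mnt" NS we do not strictly need lsns either – a sketch that scans "/proc" directly (it only finds processes we are allowed to inspect, so run it with sudo for the full picture):

```shell
# The mnt namespace id of the current shell...
mine=$(readlink /proc/$$/ns/mnt)
# ...compared against every process whose /proc entry we can read.
for p in /proc/[0-9]*; do
  other=$(readlink "$p/ns/mnt" 2>/dev/null) || continue
  [ "$other" = "$mine" ] && echo "${p#/proc/} shares $mine"
done
```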

Namespace UTS

UTS stands for Unix Time-sharing System, and the NS itself is responsible for "isolating"… the hostname (plus the NIS domain name, which no longer matters much). It is worth noting that neither the man page nor the Wikipedia page on NS spells out the full name of UTS – given its "precision", the full name is simply not used much.

Let’s create a new UTS NS:

$ sudo unshare -u

And for greater clarity of the description, let's check the PID with which our new namespace is "connected":

(unshare)$ echo $$ 

A namespace inherits the "hostname" from its parent, so:

(unshare)$ hostname 

should give the same result as in the parent UTS NS (i.e. in our host system).

Now let’s change the hostname:

(unshare)$ hostname foo

Now the command "hostname" in this particular NS will show the new name, and if we check the "hostname" of our host system (the default UTS NS) in a separate console, we will see that it has not changed. Note that the "command prompt" of our shell process with PID = 1234 (if the "hostname" is shown there) remains unchanged – but that has nothing to do with NS. If in our shell with PID = 1234 we start another shell process:

(unshare)$ bash

The command prompt will contain the "correct" hostname for this NS, i.e. "foo".

Likewise, if we use the command "nsenter" to connect another shell process to this namespace:

$ sudo nsenter -u -t 1234 

the command prompt will contain the changed "hostname" – it is just that bash (used here) does not update the "hostname" dynamically.
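The "hostname" that this NS isolates is simply a kernel value, visible per namespace – a small sketch showing two ways of reading the same UTS-namespace value, which always agree within one NS:

```shell
# Two views of the same per-namespace value:
echo "uname -n          : $(uname -n)"
echo "/proc sysctl file : $(cat /proc/sys/kernel/hostname)"
# After "hostname foo" inside a new UTS NS, both of these change there,
# while the host system keeps its original name.
```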

Namespace Process ID (pid)

Both this NS and the next one (User ID) can be "nested", and for these two NS (PID and User ID) the mapping of identifiers from the parent NS onto identifiers in the child NS happens in the background. Let's look at a simple example:

$ sudo unshare -p -f  
(unshare)$ echo $$ 

Two things are worth mentioning:

  1. Pay attention to the additional parameter "-f" of the "unshare" command – it is used exclusively together with the parameter "-p" and is needed for processes in a newly created namespace to function correctly (try skipping it and see what happens the moment you start any command in a namespace created this way) – a detailed explanation of why this parameter is required can be found e.g. here
  2. The command "echo $$" (as we know) returns the PID of the started shell process – in this case PID = 1!

Everything seems to work fine, but when we execute the command "ps" it turns out that we see many more processes than we might have expected! This, however, is a result of how "ps" works – it does not take the information about processes from the current NS, but from the "/proc" directory! So from the point of view of "ps" nothing changes if our child NS uses the same "/proc" as the parent NS (as in this case).
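That "ps" reads from "/proc" is easy to verify – the same data can be pulled out of the per-process files directly (a minimal sketch):

```shell
# ps assembles its listing from per-process files like these:
pid=$$
echo "pid  : $pid"
echo "comm : $(cat /proc/$pid/comm)"                # process name, as ps shows it
echo "ppid : $(awk '{print $4}' /proc/$pid/stat)"   # 4th field of stat = parent pid
```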

How do we fix it? Let's use an additional "mnt" namespace and the already "tested" command "chroot".

$ sudo unshare -p -f -m chroot /directory 
(unshare)# mount -t proc proc /proc 
(unshare)# echo $$ 
(unshare)# ps ax 
    1 ?        S      0:00 /bin/bash -i 
    4 ?        R+     0:00 ps ax 

Now it is just as it should be! :)

Let’s look at example of another nesting:

(unshare pid + chroot)# unshare -p -f  
(1) mesg: ttyname failed: No such file or directory 
(unshare pid + chroot)# echo $$ 
(2) 1 
(unshare pid + chroot)# ps axf 
    1 ?        S      0:00 /bin/bash -i 
    5 ?        S      0:00 unshare -p -f 
    6 ?        S      0:00  \_ -bash 
   11 ?        R+     0:00      \_ ps axf 

So in our "unshared" and "chrooted" environment we create another nesting of the PID NS – we see an error message (1) that can be ignored. (2) The new shell process created in the new PID NS has PID = 1 (as shown by the echo command); however, the command "ps" (as we established) identifies this process by the identifier assigned to it in the "/proc" directory – here as the process with PID = 6.

Finally, let's try to "connect" (using nsenter) to the PID NS created by the first "unshare" command – first we need to determine the PID of the shell process started in this NS:

$ sudo lsns
4026532455 mnt         4 27902 root             unshare -m -f -p chroot FS_AA 
4026532456 pid         2 27903 root             /bin/bash -i 

So PID that interests us is 27903 – now let’s try:

$ sudo nsenter -p -t 27903 
(nsenter)# echo $$ 

Obviously, the command nsenter starts a NEW shell process, which is why its PID no longer equals "1" – but it is still a PID that makes sense only inside the PID NS of process 27903.
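The pid mapping between nested PID NS can also be read straight from "/proc" – on kernels >= 4.1 the "NSpid" line in a process "status" file lists its pid in every PID NS it belongs to, outermost first:

```shell
# For a process in nested PID namespaces this prints several numbers,
# e.g. "NSpid:  27903  1" - pid 27903 outside, pid 1 inside.
grep '^NSpid' /proc/$$/status
```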

Namespace User ID (user)

Here I recommend doing your own experiments; this NS works similarly to the PID NS, including the possibility of nesting. Docker and Lxc seem not to use this NS (at least in the environments I have had an opportunity to work with).

Namespace Interprocess Communication (ipc)

The NS that lets us isolate the system (Linux) structures for inter-process communication (e.g. semaphores) – so far I have not had an opportunity to use it, so I have no experience in this field – I leave this topic open for self-exploration! :)

Namespace Network (net)

The network NS is in a class of its own :) It may be the only NS (except perhaps "mnt") whose standalone use can be not just "demonstrative" but completely practical. A great example of this is the Mininet project!

To present some of the possibilities the net namespace gives us, we first need to get to know the veth interface type:

$ ip link add ve0 type veth peer name ve1 

The command above creates two new network interfaces in our GNU/Linux system – ve0 and ve1. These interfaces are already connected to each other, which we can picture as two network cards – in the same machine – joined with a network cable (which may look weird, but in this case it makes, or will make, sense).

By default our new interfaces are neither configured nor enabled, so to use them we must bring them up (additionally, I will assign an IP address to one of them):

$ ip address add dev ve0 
$ ip link set ve0 up 
$ sudo ip l s ve1 up 

Now we can test whether it works. Checking the current routing table ("ip r"), we should already see a route to the newly addressed network, so let's run in one console:

(console 1)$ ping

And in another terminal / console:

(console 2)$ tcpdump -i ve1 -n -e arp

The "ping" command will report errors, because the target address does not exist (so we will not get a proper answer at the icmp protocol level), but tcpdump (monitoring ve1) will show traffic at layer 2 – specifically an ARP request sent from ve0: "who has" – so the connection works!

Why didn't we assign the address to ve1? Well… try it if you want to :-D
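The whole setup so far can be collected into one script – a sketch only: the addresses below ( are example values of my own, not taken from the original commands, and creating veth interfaces needs root plus CAP_NET_ADMIN, so the script backs off cleanly when it lacks privileges:

```shell
# Sketch of the veth pair setup; is an assumed example network.
if ip link add ve0 type veth peer name ve1 2>/dev/null; then
  # Assign an example address to one end and bring both ends up.
  ip address add dev ve0
  ip link set ve0 up
  ip link set ve1 up
  # Assigning the address also installed a route for the network:
  ip route show | grep -q '10\.99\.99\.' && echo "route present"
  # Clean up - deleting one end of a veth pair removes both interfaces.
  ip link del ve0
  veth_status=created
else
  echo "cannot create veth here (needs root / CAP_NET_ADMIN)"
  veth_status=skipped
fi
```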

Now the trick – let's move one of the interfaces to a separate NS (for now both live in the same one). In one of the terminals we create a new net NS with the unshare command, and then check the "pid" of the shell process started in this NS (it will be substituted for ${NETPID} in a later command):

(console 1)$ sudo unshare -n 
(unshare)$ echo $$ 

We can now check the list of interfaces in the newly created namespace ("ip l") – it should contain only the loopback interface. Let's "move" one of the interfaces created earlier into this new namespace:

(console 2)$ sudo ip link set ve1 netns ${NETPID}

The command "ip l" in each of the terminals should confirm the change – the interface ve1 has "vanished" from the default namespace (console 2) and appeared in the newly created namespace assigned to the ${NETPID} process (console 1). The moved interface was reset in the process, however, so we must bring it up again (and, this time, assign an IP address to it):

(console 1)$ ip a a dev ve1 
(console 1)$ ip l s ve1 up 

And final test:

(console 1)$ tcpdump -i ve1 -n -e arp 
(console 2)$ ping 

Now "ping" should work – there’s power in it! :)

Namespace Control group (cgroup)

Somewhere I came across the statement that containers are built on two pillars – Namespaces and Control Groups (cgroups) – where NS decides what a process "sees" and cgroups decide what a process can "do". That is probably true, but it sounds a bit odd once we notice that cgroup is itself one of the NS (what is more, it seems that when SELinux is used, the cgroup NS loses significance – see Centos, whose distribution kernel does not have the cgroup NS compiled in at all). There is probably a lot here to explain / verify – personally I am (for now) setting the topic of cgroups aside… maybe I will return to it another time.

Docker & Lxc

That's everything on namespaces themselves – now let's see how it looks in practice, using Docker and Lxc as examples. Let's begin with Docker – if we have any docker container running in our system, we can check its PID:

$ docker inspect CONTAINER | grep -i pid

For example:

$ docker inspect ba26bbd2de76 | grep -i pid 
            "Pid": 4542, 
            "PidMode": "", 
            "PidsLimit": 0, 

Then we can check the NS identifiers used by the main process running in the container:

$ lsns -p PID

Now, some information for users of MacOS (OS X) or Windows (or anything other than GNU/Linux) – in that case Docker runs in a virtual machine, so the process indicated by "docker inspect" will not be visible in your local (host) system – it is started inside that virtual machine (hyperkit, hyperv or VirtualBox). And if by chance a process with that PID does exist in your system – it has nothing to do with our container. On MacOS, if we use hyperkit (the currently recommended setup), we must connect to the console of this hypervisor to continue the example. We can do this e.g. with the screen command:

$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

(the path to the file/symlink tty can be different depending on the version of the Docker for Mac app, so if you can't find it in indicated location, please check its subdirectories).

In the case of VirtualBox I think that it is much easier and everyone will deal with it :)

When we finally work in the console of the system, in which the process of our container has been run, we should be able to find it on the list of processes:

$ ps ax | grep 4542 
 4542 root       0:00 python2 

Then the command "lsns -p PID" should give us the result similar to the one below:

$ lsns -p 4542 
4026531835 cgroup    219     1 root /sbin/init text 
4026531837 user      220     1 root /sbin/init text 
4026533249 mnt         2  4542 root python2 
4026533250 uts         2  4542 root python2 
4026533251 ipc         2  4542 root python2 
4026533252 pid         2  4542 root python2 
4026533254 net         2  4542 root python2 

Interestingly, Docker does not use the "cgroup" and "user" NS – in the list above we can see that these NS are "inherited" from the init/systemd process.

Now we can perform our main "trick" – "connect" to the console of this container using the nsenter command:

$ nsenter -m -u -p -n -i -t 4542 bash

Where 4542 is, of course, the PID indicated by the "docker inspect" command! Naturally, the bash shell must be available in our container. The result should be essentially identical to the one obtained by executing:

$ docker exec CONTAINER bash

Fun, isn’t it? :)
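We do not even need a Docker container at hand to rehearse this comparison – the same "identical set of NS" check can be sketched locally by comparing the namespace sets of two processes (here the current shell and a child; with a container you would use its PID from "docker inspect" instead):

```shell
# Build a "type id" list of all namespaces of a process.
ns_of() {
  for f in /proc/"$1"/ns/*; do
    echo "${f##*/} $(readlink "$f")"
  done
}
parent_set=$(ns_of $$)
# A plain child process inherits every namespace, so the two sets match.
child_set=$(sh -c 'for f in /proc/$$/ns/*; do echo "${f##*/} $(readlink "$f")"; done')
[ "$parent_set" = "$child_set" ] && echo "identical namespace sets"
```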

In the case of Lxc the information on a given container (and PID) can be acquired by the command:

$ sudo lxc-info -n CONTAINER

After checking the NS ("lsns -p PID") it turns out that Lxc does use the cgroup NS!

$ sudo lxc-info -n deb01 | grep -i pid 
PID:            9867 
$ sudo lsns -p 9867 
4026531837 user       79     1 root /sbin/init 
4026532169 mnt         9  9867 root /sbin/init 
4026532170 uts         9  9867 root /sbin/init 
4026532171 ipc         9  9867 root /sbin/init 
4026532172 pid         9  9867 root /sbin/init 
4026532174 net         9  9867 root /sbin/init 
4026532230 cgroup      9  9867 root /sbin/init 

It is also interesting that process 9867 (the main process of the Lxc container) is itself "init" – this follows from the fact that Lxc provides so-called "system" containers (which contain all the processes normally started in a full system – that is why Lxc is much closer to virtualization), whereas Docker provides so-called "application" containers – you can read more about Docker in one of the previous articles.

"Connecting" to an Lxc container looks the same as for a Docker container (note the additional parameter "-C", meaning the cgroup NS):

$ sudo nsenter -m -u -C -i -p -n -t 9867

It works and I encourage you to carry out your own experiments!


I wrote above that tools like Docker or Lxc can be seen as high-level tools for managing NS. The command "nsenter" lets us, for example, connect to any container at the "low level". Of course, as I mentioned, I do not mean to encourage anyone to use nsenter in everyday work – not at all! I simply wanted to sketch a general, simplified picture of how these solutions interact with Linux.

Another thing is that, as with almost any descent to a lower level of the technology stack, here too we gain new possibilities! With tools like Docker or Lxc we work only on defined groups of NS (forming "containers") – for tools like nsenter/unshare the idea of a "container" does not exist – there we work purely on NS. What can that mean? Let's imagine two containers:

$ docker inspect ba26bbd2de76 | grep -i pid 
  "Pid": 1111, 

$ docker inspect 425d479f0666| grep -i pid 
  "Pid": 2222, 

And now let’s imagine (or test in practice) the effect of the following commands:

$ sudo nsenter -n -i -t 1111 
(nsenter)$ nsenter -m -u -t 2222 
(nsenter)$ echo $$ 

What is process "3333"? A new container using the net and ipc NS of container 1111 and the mnt and uts NS of container 2222 (plus the "default" pid NS)? Or have we just mixed the two containers together? An interesting result, is it not? :D

The above is interesting enough that I wonder whether the idea of containers (as implemented by Lxc or Docker) will last. Maybe at some point we will stop using the abstract structure of a "container" and work directly with NS? That would surely require breaking (within such "post-container" solutions) with the analogy to virtualization, but do we really need those "associations"? The users' "awareness" will probably matter most – whether they become interested in NS as a fully independent mechanism. We will wait and see! :)