Channel: JetsonHacks

Containers for Docker Intuitively Explained


Containers are an operating system feature that has a very intuitive explanation. Looky here:

Background

Let’s dust off the old computer science degree and go back to Operating Systems 101. We think about the computer as two different parts. First is the hardware, which is the physical part. We’re more interested in the internals. This consists of the CPU, main memory, and secondary memory like SD cards and drives. There’s also networking hardware, internal devices like I/O controllers, and the various external attached devices like the keyboard, mouse, and cameras. Here we’re most interested in the CPU, main and secondary memory, and networking.

The computer runs software. We call that collection of software the operating system. Here we’re talking about a multi-user, multitasking operating system like Linux, Macintosh, Windows, and the like. While most of this discussion holds true for Mac and Windows, we’ll be thinking in terms of Linux because that’s where container technology originated.

The Kernel: The Brain of the Operating System

Now, the OS itself has two major parts. First, there’s the kernel, which is like the brain of the OS. It handles process management — which simply means making sure programs run smoothly by organizing when and how they should be executed. Think of a process as a running program. The kernel also takes care of memory management, which is about deciding how much memory each program gets and ensuring they don’t interfere with each other. This is crucial to keep your computer from crashing or freezing up.
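Process management is easy to see from a running program. Here’s a minimal Python sketch: when `subprocess.run` is called, Userland asks the kernel (via the fork and exec system calls) to create and schedule a brand-new process.

```python
import os
import subprocess

# Ask the kernel to create a new process running echo.
# Under the hood this is the fork(2)/execve(2) pair: the kernel
# clones our process, then replaces the clone's program image.
result = subprocess.run(
    ["echo", "hello from a new process"],
    capture_output=True,
    text=True,
)

print("parent pid:", os.getpid())
print("child said:", result.stdout.strip())
```

Both processes here are ordinary kernel-managed processes; the kernel decided when each one ran and kept their memory separate.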

Then there’s the file management system, which organizes data on your drives, making sure files are stored securely and efficiently. It uses a system called the virtual file system, which is like a universal translator that allows different types of storage to work together. The kernel also handles networking, which is how your computer talks to other devices, using protocols like TCP/IP. It even manages security, ensuring that unauthorized users can’t do bad things.

Userland: The Everyday Software Environment

Next up, we have Userland, the part of the OS that most of us interact with every day. Userland includes all the software applications and tools that run on top of the kernel. While the kernel does the heavy lifting in the background, Userland is where developers get their hands dirty, using special instructions (called system calls) and libraries to interact with the kernel. This is also where you’ll find the root file system (rootfs), which is a hierarchy of files where the OS keeps all the critical files it needs to run.
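You can watch Userland talk to the kernel from a few lines of Python; each of these calls is a thin wrapper around a system call. A minimal sketch (the exact output will differ on your machine):

```python
import os

# Each Python call below wraps a system call.
pid = os.getpid()   # getpid(2): our process ID, assigned by the kernel
info = os.uname()   # uname(2): the kernel identifies itself
print(f"process {pid} on {info.sysname} kernel {info.release}")

# The root file system (rootfs) is the hierarchy starting at "/".
print("top of the rootfs:", sorted(os.listdir("/"))[:6])
```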

Distributions: Tailored Operating Systems

When you combine a GNU Userland and a Linux Kernel, you get a distribution—think of it as a customized version of Linux like Ubuntu or Arch. There are hundreds of different distributions out there, each tailored for specific tasks. This is so that no matter which one you pick, other people will be able to tell you that you picked the wrong one.

Containers: A New Way of Packaging and Computing

Containers are a simple concept. The idea is that there is the original Userland, which we call Host. A container is a totally separate Userland which a Host can create and manage. This requires a lot of code under the hood. The idea is simple, but the devil is in the details!

The Role of the Kernel in Containers

The kernel plays a pivotal role in the creation and management of containers. It sits directly on top of the hardware, managing the underlying resources and ensuring that containers operate efficiently and securely. Here are the major features that the kernel implements to support containers:

Namespaces: This feature isolates system resources, allowing processes within a container to have their own independent view of the system. Think of it as giving each container its own private machine. Namespaces control resources like process IDs (PIDs), network interfaces, and file systems, ensuring that each container remains securely isolated from others and the host. This isolation is crucial for maintaining security and preventing containers from interfering with each other. Containers do not know about other containers on the system, or the Host. However, the Host knows about all of the containers.

Each namespace type has its own implementation. The pid namespace will create a new process tree for a Container when it is launched. A network (net) namespace will give the Container its own network stack, with its own interfaces, routing table, and ports, and so on.
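On any Linux machine you can see which namespaces a process belongs to by reading the symbolic links in /proc/self/ns. A minimal read-only sketch (the inode numbers in brackets will differ on your system):

```python
import os

# /proc/<pid>/ns holds one symlink per namespace type (pid, net,
# mnt, uts, ipc, user, ...). The number in brackets identifies the
# namespace: processes in the same Container share these numbers,
# while the Host and other Containers have their own.
for ns in sorted(os.listdir("/proc/self/ns")):
    print(f"{ns:10s} -> {os.readlink(f'/proc/self/ns/{ns}')}")
```

Run this on the Host and inside a container and you’ll see different numbers; that difference is the isolation.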

Control Groups (cgroups): While namespaces provide isolation, cgroups handle resource management. cgroups allocate and manage system resources—such as CPU, memory, and IO—among containers. This ensures that no single container can hog all the resources, preventing performance issues and maintaining system stability. Imagine cgroups as a traffic controller, ensuring that each container gets its fair share of resources without causing gridlock.
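You can ask the kernel which cgroup a process sits in by reading /proc/self/cgroup; the limits themselves live as plain files under /sys/fs/cgroup. A minimal read-only sketch (the memory.max path assumes cgroup v2, which most current distributions use, so it’s guarded):

```python
from pathlib import Path

# Which cgroup is this process in?
with open("/proc/self/cgroup") as f:
    print(f.read().strip())   # e.g. 0::/user.slice/...

# On cgroup v2, limits are just files; memory.max caps how much
# memory every process in this group may use ("max" = unlimited).
mem_max = Path("/sys/fs/cgroup/memory.max")
if mem_max.exists():
    print("memory.max:", mem_max.read_text().strip())
```

Container runtimes enforce flags like a memory limit simply by writing the value into files like this for the container’s cgroup.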

Union File System: This is where things get a bit more complex. The union file system allows multiple file systems to be layered on top of each other. Changes made within a container are written to an upper layer, while the original data in lower layers remains unchanged. This is essential for efficient storage use and enables containers to be created from base images with minimal duplication. Although the concept can be tricky to grasp, think of it as a stack of transparencies—each layer adds new details without altering the layers beneath.
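The transparency-stack idea can be mimicked in a few lines of Python. collections.ChainMap is not a real union file system, of course, but it follows the same rule: reads fall through the layers from top to bottom, while writes only ever land in the top layer.

```python
from collections import ChainMap

# Lower layer: the read-only base image, shared by every container.
base_image = {"/etc/hostname": "base", "/usr/bin/python3": "3.10"}

# Upper layer: this container's private, writable layer.
upper_layer = {}

rootfs = ChainMap(upper_layer, base_image)

print(rootfs["/usr/bin/python3"])         # read falls through to the base image
rootfs["/etc/hostname"] = "my-container"  # write lands in the upper layer
print(rootfs["/etc/hostname"])            # the container sees its own copy
print(base_image["/etc/hostname"])        # the base image is untouched: "base"
```

Because the base image is never modified, ten containers built from it share one copy of its layers; only their small upper layers differ.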

A major takeaway is that the container works directly with the running Kernel. The container is fast to start up once the container image has been downloaded. That’s because very little setup is needed to get the container processes running; it’s basically the same as launching a program, with just a little extra setup.

Userland Containers and Docker: Building on Kernel Features

Now, let’s shift our focus to Userland. At the core of container management is runc, the low-level container runtime. runc adheres to the Open Container Initiative runtime specification, meaning it follows a set of industry standards for how containers should operate. Its primary job is to create and run containers by setting up the necessary environments. This involves configuring namespaces for isolating processes, cgroups for managing how much of the system’s resources a container can use, and establishing the root file system that the container will rely on.

runc interacts directly with the Linux kernel to create these isolated environments, ensuring that each container operates in its own little world, separate from everything else on the system.

But runc doesn’t work alone. containerd is another crucial component that handles the lifecycle of containers. containerd manages how containers start, stop, and how they interact with each other. containerd also pulls Container images—essentially the blueprints for containers—from repositories, getting them ready for execution. To actually spin up these containers, containerd uses runc, ensuring everything is set up properly.
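The OCI runtime specification that runc follows boils down to a JSON file (config.json) describing the container: what to run, which rootfs to use, and which namespaces to create. Here’s an abridged sketch of its shape built in Python (the field names follow the OCI spec; the values are illustrative, not a complete working config):

```python
import json

# Abridged OCI runtime config: the program to run, the rootfs to
# use, and the namespaces the runtime should create for it.
config = {
    "ociVersion": "1.0.2",
    "process": {"args": ["sh"], "cwd": "/"},
    "root": {"path": "rootfs"},
    "linux": {
        "namespaces": [
            {"type": "pid"},
            {"type": "network"},
            {"type": "mount"},
        ]
    },
}

print(json.dumps(config, indent=2))
```

Any OCI-compliant runtime can consume a file like this, which is why containerd can treat runc as a swappable component.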

Docker

And this brings us to Docker, the platform that helps the user wield this magic. Docker packages applications and all their dependencies into lightweight, portable Containers that can run consistently across various environments, from your laptop to massive cloud servers.

Docker uses containerd to handle the lifecycle of these containers, taking care of tasks like starting, stopping, and supervising them. containerd is the workhorse that pulls the necessary images from repositories and uses runc to actually create and execute the containers. This layered architecture—from runc to containerd to Docker—ensures a robust and scalable solution for deploying applications. A user uses a tool called the Docker Client to manage all this goodness.

Conclusion

Containers are an important part of the modern computing ecosystem. They’ve been a staple in server environments for a decade, and there are many reasons to consider using them when developing sophisticated AI applications on Jetson. We’ll be looking into this in more detail in upcoming articles.

The post Containers for Docker Intuitively Explained appeared first on JetsonHacks.

