Linux Operating Systems for Kubernetes – OS Support

This is a guide to Linux operating systems for running Kubernetes. It covers general purpose and container-specific operating systems, and includes a list of the most common Linux distributions.

Linux Operating System Description

Linux is a family of open-source, Unix-like operating systems based on the Linux kernel, which was first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution. See more on Wikipedia.

  • Initial release date: 17 September 1991
  • OS family: Unix-like
  • Programming languages: C, Assembly language
  • Default user interfaces: Unix shell, KDE Plasma 5, MATE, Cinnamon, Unity, LXDE, Xfce
  • Update methods: KernelCare, dpkg, GNOME Software
  • Developer: Community contributors, Linus Torvalds


Now, you’re on board with Kubernetes (or thinking about your first Kubernetes deployments). There are several good reasons for this, which you already know well – Kubernetes takes care of container management, scheduling workloads across a cluster, automating rollouts and rollbacks, and handling scaling and replication.

Kubernetes is an infrastructure-neutral system that drives the elements it controls to a desired state, using declarative statements that define the state the systems and applications should be in. The result is an efficient and extensible system that is easier to manage.

Of course, this “ease of management” comes with a learning curve, but the advantages of modern container-based software development on infrastructure that offers scalability and portability are well worth it.

Kubernetes Still Needs an Operating System

While Kubernetes enables operational scalability and management of containers, it does not directly help you manage the infrastructure on which Kubernetes itself relies. Kubernetes is itself an application (or a set of applications), and those applications have to run somewhere.

Kubernetes is not an operating system, notwithstanding what you may have read; it still relies on Linux (or Windows) being installed and configured on the nodes.

Kubernetes can run on cloud providers such as AWS or GCE, on virtualization platforms such as VMware, on Docker on a laptop, or on bare metal server hardware using tools such as Sidero – but all of these still involve installing an operating system first. (Some, like AWS EKS, eliminate the need to manage the control plane nodes, but still require you to set up Linux servers for the worker nodes.)

Operationally, the emphasis is on Kubernetes and the workloads it runs – as it should be! – but this leads to a problem often seen in Kubernetes implementations.

Although Kubernetes itself can be patched and updated periodically (it often is not, and is left in a dangerous “set it and forget it” security state), the maintenance, upgrades, security and operations of the underlying operating systems are often overlooked or ignored – at least until a security audit is due!

I’ve often heard SREs and system administrators complain that having to manage both Linux and Kubernetes amounts to an extra job. Just like a generic Linux OS, Kubernetes needs patching, upgrading, locking down, user access monitoring, and so on.

But just because such tasks are performed at the Kubernetes level does not mean they can be ignored at the OS level. Choosing the right underlying operating system, however, will go a long way towards reducing the workload of maintaining the OS and minimizing the consequences of not keeping it up to date.

So, given that you need to install Linux first in order to run Kubernetes, and that there will be consequences that flow from the underlying OS, which Linux is best for Kubernetes? You can choose from a number of options, but they typically fall into two types: container-specific OSs or general purpose OSs.

General Purpose Linux Operating Systems

These are the “normal” kinds of Linux.

Most people would be familiar with running a general purpose Linux operating system such as Ubuntu, Debian, CentOS, Red Hat Enterprise Linux (RHEL), or Fedora.

That is one of the key benefits of running a general purpose OS under your Kubernetes cluster – your system administrators will already know how to install, upgrade and secure those Linux distributions. Existing tool sets for kickstarting servers, installing the OS, and configuring it to a baseline security level can be reused.

And because Kubernetes runs on top of these general purpose systems, existing patch management and vulnerability detection tools should run fine on them.

However…

A general purpose Linux system comes with general purpose Linux management overhead. That means managing user accounts, patch management, kernel updates, firewalling services, protecting SSH, disabling root logins, disabling unused daemons, kernel tuning, and so on all need to be done and kept up to date.
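
For a sense of what a couple of those checks look like in practice, here is a minimal sketch in Python, assuming a systemd-based host and the standard /etc/ssh/sshd_config path. It only verifies that root SSH logins are disabled and lists the services enabled at boot; a real hardening audit would cover far more.

    import subprocess

    def root_login_disabled(sshd_config="/etc/ssh/sshd_config"):
        # True only if PermitRootLogin is explicitly set to "no".
        with open(sshd_config) as f:
            for line in f:
                parts = line.strip().split()
                if parts and parts[0].lower() == "permitrootlogin":
                    return len(parts) > 1 and parts[1].lower() == "no"
        return False  # not set explicitly; the effective default varies by distro

    def enabled_services():
        # Lists systemd units enabled to start at boot (systemd hosts only).
        result = subprocess.run(
            ["systemctl", "list-unit-files", "--state=enabled", "--no-pager"],
            capture_output=True, text=True)
        return result.stdout

    print("root SSH login disabled:", root_login_disabled())
    print(enabled_services())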

As noted, many of these tasks can be performed with existing tools (Ansible, Chef, Puppet, etc.) that already manage other servers, but updating the manifests or control files so that the server profiles are suitable for Kubernetes master and worker nodes is, let’s say, non-trivial.

Synchronizing operating system changes with Kubernetes maintenance is another issue. Often there is no such coordination, so the operating system is simply left as it was after installation.

Kubernetes will (hopefully) be updated as time goes on, but the underlying operating system can be left unchanged, slowly accumulating a load of known CVEs (Common Vulnerabilities and Exposures) in the installed packages and kernel.

Coordinating OS Updates with Kubernetes

Ideally, you would coordinate your automation platform (such as Ansible or Puppet) with Kubernetes so that the node operating system can be updated without interrupting Kubernetes service. This implies that a tool must (a minimal sketch follows the list):

  • Cordon the node so that no new workloads are scheduled on it.
  • Drain the node to transfer all the running pods to other nodes.
  • Update and patch the node
  • Uncordon the node
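
A minimal sketch of that cycle, using the official Python kubernetes client, might look like the following. The node name is a placeholder, and a production tool would also respect PodDisruptionBudgets, skip DaemonSet pods, and rate-limit how many nodes it touches at once.

    from kubernetes import client, config

    config.load_kube_config()   # or config.load_incluster_config() when run in a pod
    v1 = client.CoreV1Api()

    NODE = "worker-1"           # hypothetical node name

    # 1. Cordon: mark the node unschedulable so no new pods land on it.
    v1.patch_node(NODE, {"spec": {"unschedulable": True}})

    # 2. Drain: evict every pod currently running on the node.
    pods = v1.list_pod_for_all_namespaces(field_selector="spec.nodeName=" + NODE)
    for pod in pods.items:
        # Older client versions name this model V1beta1Eviction instead.
        eviction = client.V1Eviction(metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace))
        v1.create_namespaced_pod_eviction(
            pod.metadata.name, pod.metadata.namespace, eviction)

    # 3. Patch, update and reboot the node here (outside the scope of this sketch).

    # 4. Uncordon: make the node schedulable again.
    v1.patch_node(NODE, {"spec": {"unschedulable": False}})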

And the tool must, of course, ensure that not too many nodes are updated at once, so that the cluster’s workload capacity is not adversely affected (nor too few, so that updating a large cluster does not proceed more slowly than patches and updates are released).

In order to avoid reboots and disruption, you will want to align OS updates with Kubernetes updates, but you may also need to support more urgent OS updates on short notice.

The great benefit of a general purpose Linux OS is the experience staff already have with it. They will be familiar not only with deployment but also with troubleshooting strategies, and they can use (or install, if not already present) their standard operating system tools such as tcpdump, strace, and lsof.

Settings can easily be modified to correct errors and to evaluate alternatives (something that is both a blessing and a curse!). The drawbacks are the management overhead, the greater complexity and work required to secure the platform, and the need to coordinate infrastructure changes with Kubernetes operations.

Container Specific Operating Systems

The National Institute of Standards and Technology has a good overview that summarizes some of the benefits of a Container Specific OS:

“A container-specific host OS is a lightweight OS that is specifically configured to run only containers, removing all other services and functions, and utilizing read-only file systems and other hardening practices.

Attack surfaces are usually much smaller when using a container-specific host OS than they would be for a general-purpose host OS, so there are fewer ways to attack and compromise a container-specific host OS. Organizations can also use container-specific host OSs wherever necessary.”

NIST Special Publication 800-190, Application Container Security Guide: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf

To summarize the obvious: the less software and the fewer packages an OS runs, the smaller the attack surface, and the fewer vulnerabilities are present. This makes a container-specific base OS substantially safer from the start, even without regular patching.

Other protection methods may also be employed by container-specific operating systems, such as mounting the root file system (or, preferably, all file systems!) read-only, mitigating the impact of any vulnerability.
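
As a quick illustration (a sketch, not part of any particular OS), you can see which file systems a Linux node has mounted writable by parsing /proc/mounts; on a container-specific OS the writable list should be very short.

    def writable_mounts(mounts_file="/proc/mounts"):
        # Return the mount points whose options include "rw" (i.e. writable).
        writable = []
        with open(mounts_file) as f:
            for line in f:
                device, mountpoint, fstype, options = line.split()[:4]
                if "rw" in options.split(","):
                    writable.append(mountpoint)
        return writable

    if __name__ == "__main__":
        print(writable_mounts())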

Container-specific OSs usually do not run (or support) package managers. This decreases the probability of a package being installed or changed and creating a conflict that prevents a node or service from working. The absence of configuration management tools like Chef and Puppet also decreases the probability that configuration drift or incomplete runs will adversely affect the system’s operational stability.

Instead, updates are delivered as a complete OS image with all updates and configuration applied, written to an alternate boot partition and booted into at the next reboot, with the option of dropping back to the previous known-good image. This ensures that the node configuration is known exactly at any point, and any version can be reverted to from the version control system in use.
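
The A/B update idea itself is simple; the sketch below illustrates the flow in Python with made-up slot names, device paths and a placeholder health check, not any particular OS’s implementation.

    # Illustrative A/B (dual-partition) update flow; everything here is a placeholder.
    ACTIVE_SLOT = "A"
    SLOTS = {"A": "/dev/sda2", "B": "/dev/sda3"}

    def stage_update(image_path):
        # Write the new OS image to whichever slot is currently inactive.
        target = "B" if ACTIVE_SLOT == "A" else "A"
        print("writing", image_path, "to inactive slot", SLOTS[target])
        return target

    def confirm_or_rollback(target, healthy):
        # After booting the new slot once: keep it if healthy, otherwise the
        # next reboot falls back to the previous known-good image.
        return target if healthy else ACTIVE_SLOT

    new_slot = stage_update("/tmp/new-os-image.raw")
    print("default slot after boot:", confirm_or_rollback(new_slot, healthy=True))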

Container Specific Operating Systems in Practice

Some Container Specific Operating Systems remain similar to general purpose Linux distributions. PhotonOS from VMware, for example, ships with a limited number of packages compared to a standard Linux distribution, but it still includes a package manager and SSH access, and does not mount its file systems read-only.

One point that often causes confusion is that “cloud-optimized” versions of general purpose Linux systems are still general purpose Linux systems. Ubuntu, for example, publishes “cloud images” that are “customized to run on public clouds by Ubuntu engineering” – but they remain general purpose distributions.

CoreOS was the first widely adopted container-specific OS, and it popularized the concept of running all processes in containers for extra protection and isolation. To ensure that changes were atomic and could be rolled back, CoreOS removed the package manager and rebooted into one of two read-only /usr partitions. However, since its acquisition by Red Hat, CoreOS has been end-of-lifed.

Current Container Specific OSs all aim to be minimal (very few operating system packages installed), locked down (to some degree), run their own processes in containers (for better protection, reliability and service isolation), and provide atomic updates (by booting from one bootable partition while updating the other). Examples include:

  • Google’s “Container-Optimized OS”, which supports a read-only root file system, but allows SSH and runs only on GCP
  • RancherOS, which runs SSH and does not use a read-only root file system.
  • k3OS, also from Rancher, which runs k3s rather than a full vanilla Kubernetes distribution. Management is via kubectl, but SSH is supported.
  • AWS Bottlerocket, another OS with an immutable root file system and SSH support, that is, at least initially, focused on AWS workloads.

Top 10 and Best Linux Distributions

Debian, Gentoo, Ubuntu, Linux Mint, Red Hat Enterprise Linux, CentOS, Fedora, Kali Linux, Arch Linux, and openSUSE are among the most common general purpose Linux distributions. Among the Container Specific Operating Systems, Talos OS is an outlier. It is lightweight, uses only read-only file systems (except /var and /etc/kubernetes, and one or two special writeable but ephemeral files (reset on reboot) such as /etc/resolv.conf), has no package manager, and is integrated with Kubernetes for updates via an update controller.

However, by eliminating all SSH and console access, and making all OS access and management API driven, Talos OS takes the idea of immutable infrastructure further than other OSs.

On a node running Kubernetes, there are API calls for all the things you might legitimately want to do – list all the containers, check the network setup, and so on – but no way to do things you shouldn’t be doing on a node, such as unmounting a file system. Talos OS also chose to completely rewrite the Linux init system to do just one thing: launch Kubernetes.

There are no user-defined services to manage (all of that should be handled by Kubernetes). This further reduces the security exposure (no SSH, no console), reduces maintenance (no users, no patching), and reduces the impact of any CVE (as the file systems are immutable or ephemeral).

You may not agree that it is worth giving up SSH access, restricting what SREs can do, and forcing nodes to be completely immutable – but not long ago the same arguments were made against immutable containers, so it is worth looking at.

Having an OS operated via an API also lends itself very well to large-scale operations and management – whether you need to inspect the logs for a specific container on one node, one class of nodes, or all nodes, it is the same API call with different parameters.
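
For example, with the standard Kubernetes API (sketched here with the Python kubernetes client; the node name is a placeholder), listing the pods and reading logs for one node or for the whole cluster is the same call with a different field selector.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Pods on a single node...
    on_one_node = v1.list_pod_for_all_namespaces(field_selector="spec.nodeName=worker-1")

    # ...or across every node in the cluster (drop the selector).
    everywhere = v1.list_pod_for_all_namespaces()

    for pod in on_one_node.items:
        print(pod.metadata.name)
        # Read that pod's logs (first container by default).
        print(v1.read_namespaced_pod_log(pod.metadata.name, pod.metadata.namespace))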

Summary

If you have followed the cattle-not-pets view of container management – destroying a container and deploying a new one when an upgrade or patch needs to be rolled out – then it makes sense to ensure that the infrastructure supporting the containers takes the same approach.

Adopting the paradigm that your nodes should be handled like containers – destroyed and reprovisioned for updates rather than patched in place – can take a little education, but adopting a Container Specific OS helps drive this adoption, reduces administrative overhead, and improves security.

Container Specific Operating Systems also aid operational stability – with no way for a sysadmin or developer to adjust a config to “just get it working”, the risk of human error or misconfiguration breaking the next update is removed.

Because many businesses are still early in their Kubernetes adoption lifecycle, now is a good time to get acquainted with this next generation of operating systems. By tightly integrating the OS with Kubernetes, it is possible to treat the entire Kubernetes cluster as one system, decrease the management overhead, and improve security.

This allows the emphasis to stay on the workloads and the value generated by the computing infrastructure, and is another step towards the API-driven datacenter.

