This blog was written by an independent guest blogger.
Organizations are increasingly turning to Kubernetes to manage their containers. As reported by Container Journal, 48% of respondents to a 2020 survey said that their organizations were using the platform. That’s up from 27% two years prior.
These organizations could be turning to Kubernetes for the many benefits it affords them. As noted in its documentation, Kubernetes can distribute container network traffic so as to keep organizations’ applications up and running. The platform also continuously reconciles the actual state of deployed containers with a desired state specified by the user, and it can replace or kill containers that fail a health check.
The double-edged growth of Kubernetes clusters
The benefits mentioned above trace back to the advantage of the Kubernetes cluster. At a minimum, a cluster consists of a control plane for maintaining the cluster’s desired state and a set of nodes for running the applications and workloads. Clusters make it possible for organizations to run containers across a group of machines in their environment.
There’s just one problem: the number of clusters under organizations’ management is on the rise. This growth in clusters creates network complexity that complicates the task of securing a Kubernetes environment. As StackRox explains in a blog post:
That’s because in a sprawling Kubernetes environment with several clusters spanning tens, hundreds, or even thousands of nodes, created by hundreds of different developers, manually checking the configurations is not feasible. And like all humans, developers can make mistakes – especially given that Kubernetes configuration options are complicated, security features are not enabled by default, and most of the community is learning how to effectively use components including Pod Security Policies and Security Context, Network Policies, RBAC, the API server, kubelet, and other Kubernetes controls.
The last thing that organizations want to do is give a malicious actor unauthorized access to their Kubernetes environment. This raises an important question: how can organizations make sure they’re taking the necessary security precautions?
Look to the Kubernetes API Server
Organizations can help strengthen the security of their Kubernetes environment by locking down the Kubernetes API server. Also known as kube-apiserver, the Kubernetes API server is the frontend of the control plane that exposes the Kubernetes API. This element is responsible for helping end users, different parts of the cluster and external elements communicate with one another. A compromise of the API server could enable attackers to manipulate the communication between different Kubernetes components. This could include having them communicate with malicious resources that are hosted externally. Additionally, they could leverage this communication channel to spread malware like cryptominers amongst all the pods, activity which could threaten the availability of the organization’s applications and services.
Fortunately, organizations can take several steps to secure the Kubernetes API server. Presented below are a few recommendations.
Stay on top of Kubernetes updates
From time to time, Kubernetes releases a software update that patches a vulnerability affecting the Kubernetes API server. It’s important that administrators implement those fixes on a timely basis. Otherwise, they could give malicious actors an opportunity to exploit one of those weaknesses and gain access to the API server.
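As a rough sketch of what staying current looks like in practice, the snippet below compares a running API server version against a known patched release using semantic-version ordering. Both version strings are placeholders for illustration; on a live cluster, the running version would come from `kubectl version`.

```shell
# Hypothetical check: does the running API server lag behind a patched release?
# On a real cluster the running version could be read with:
#   kubectl version -o json
RUNNING="v1.24.3"     # placeholder: version currently running
PATCHED="v1.24.10"    # placeholder: minimum release containing the fix

# sort -V orders semantic versions correctly (v1.24.3 < v1.24.10).
# If the running version sorts first and differs, an upgrade is needed.
oldest=$(printf '%s\n%s\n' "$RUNNING" "$PATCHED" | sort -V | head -n1)
if [ "$oldest" = "$RUNNING" ] && [ "$RUNNING" != "$PATCHED" ]; then
  echo "upgrade needed"
else
  echo "up to date"
fi
```

Note that a plain lexical comparison would get this wrong (the string "v1.24.10" sorts before "v1.24.3"), which is why `sort -V` is used.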
Ensure all API traffic is TLS-encrypted
In a typical cluster, the API serves on port 443 and presents a certificate for Transport Layer Security (TLS) protection. Administrators have the option of having this certificate signed by a well-known Certificate Authority (CA) or by a private CA. If the latter, administrators need a copy of that CA certificate configured in their ~/.kube/config on the client, so that the client can verify the API server’s certificate and establish a trusted connection.
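The trust relationship behind a private CA can be sketched with OpenSSL: a client holding the CA certificate can verify a server certificate signed by that CA, and nothing else can. All names below are placeholders, not real cluster credentials.

```shell
# Work in a scratch directory; these are throwaway demo keys.
cd "$(mktemp -d)"

# Create a private CA (self-signed certificate).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-private-ca" -days 1

# Create a server key and certificate signing request for the API server.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=kube-apiserver"

# Sign the server certificate with the private CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 1

# A client configured with ca.crt can now verify the server's certificate:
openssl verify -CAfile ca.crt server.crt    # prints: server.crt: OK
```

This is the same check a Kubernetes client performs when the private CA certificate is configured in its kubeconfig: TLS connections succeed only against servers presenting a certificate chained to that CA.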
Configure authorization that determines permissions
At this step, administrators can harden the Kubernetes API server by auditing the arguments with which it is launched. They can check these arguments in particular:
- Ensure that the --anonymous-auth argument is set to false. This setting disallows anonymous requests to the secure port of the API server. Anonymous requests are those that aren’t rejected by any other configured authentication method.
- Verify that the --basic-auth-file argument isn’t there. Administrators don’t want basic authentication active. It uses plaintext credentials instead of tokens or certificates in order to authenticate a user.
- Along those same lines, make sure that the --insecure-allow-any-token argument isn’t there. Doing so will disallow insecure tokens.
- If the --kubelet-https argument is there, check to see that it always shows as true. Administrators who follow this step will guarantee that connections between the API server and kubelets are protected in transit with TLS.
- Confirm that the --repair-malformed-updates argument shows as false. If administrators want to keep the API server secure, they need to make sure that this part of the control plane does not accept intentionally malformed requests from clients.
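The checks above can be scripted. Below is a minimal sketch that assumes a kubeadm-style cluster, where the API server runs as a static pod whose manifest typically lives at /etc/kubernetes/manifests/kube-apiserver.yaml; a stand-in file and sample flags are used here so the checks can run anywhere.

```shell
# Placeholder for the real manifest path on a control-plane node.
MANIFEST=/tmp/kube-apiserver.yaml

# Stand-in manifest with an illustrative (safe) flag configuration.
cat > "$MANIFEST" <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false
    - --authorization-mode=Node,RBAC
EOF

# Anonymous requests should be explicitly disabled...
grep -q -- '--anonymous-auth=false' "$MANIFEST" && echo "anonymous auth disabled"

# ...and basic authentication should not be configured at all.
grep -q -- '--basic-auth-file' "$MANIFEST" || echo "basic auth not configured"
```

The same grep pattern extends to the other arguments listed above; the goal is simply to turn the manual checklist into something repeatable across every control-plane node.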
Administrators can find more useful guidance for securing their organization’s Kubernetes API server in the platform’s documentation.