Tips for minimizing security risks in your microservices

March 4, 2021  |  David Bisson

This blog was written by an independent guest blogger.

Organizations are increasingly turning to microservices to facilitate their ongoing digital transformations. According to ITProPortal, more than three quarters (77%) of software engineers, systems and technical architects, and decision makers said in a 2020 report that their organizations had adopted microservices. Almost all (92%) of those respondents reported a high level of success. (This could explain why 29% of survey participants were planning on migrating the majority of their systems to microservices in the coming years.)

Containers played a big part in some of those surveyed organizations’ success stories. Indeed, 49% of respondents who claimed “complete success” with their organizations’ microservices said that they had deployed at least three quarters of those microservices in containers. Similarly, more than half (62%) of the report’s participants said that their organizations were deploying at least some of their microservices using containers.

The benefits and challenges of microservices

Microservices present numerous opportunities to organizations that adopt them. They are smaller in size, notes Charter Global, which makes it possible to maintain code and add more features in a shorter amount of time. Organizations can also deploy individual microservices independently of one another, feeding a more dynamic release cycle, and scale those services horizontally.

Notwithstanding those benefits, microservices introduce several security challenges. Computer Weekly cited complexity as the main security issue. Without a uniform design standard, admins can build microservices in different environments with different communication channels and programming languages. All of this variety introduces complexity that expands the attack surface.

So too does the growing number of microservices. As they scale their microservices to fulfill their evolving business needs, organizations need to think about maintaining the configurations for all of those services. Monitoring is one answer, but they can’t rely on manual processes to obtain this level of visibility. Indeed, manual monitoring leaves too much room for human error to increase the level of risk that these services pose to organizations.

Kubernetes as an answer

Fortunately, Kubernetes can help organizations to address these challenges associated with their microservices architecture. Admins can specifically use the popular container management platform to maintain their microservices architecture by isolating, protecting and controlling workloads through the use of Network Policies, security contexts enforced by OPA Gatekeeper, and secrets management.

Kubernetes network policies

According to Kubernetes’ documentation, groups of containers called “pods” are non-isolated by default. They accept traffic from any source in a standard deployment. This is dangerous, as attackers could subsequently leverage the compromise of one pod to move laterally to any other pod within the cluster.

Admins can isolate these pods by creating a Network Policy. These components work by restricting the types of connections to and from one or more selected pods within a namespace. Best of all, Network Policies don’t conflict. They are additive: the traffic allowed to a pod is the union of the ingress/egress rules of every policy that selects it, so admins never have to worry about the order of evaluation.
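As an illustrative sketch (the namespace, names, labels, and port below are hypothetical), a minimal Network Policy that restricts ingress to a set of backend pods might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: demo
spec:
  # Select every pod labeled app: backend in the demo namespace
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # Only pods labeled app: frontend may connect, and only on TCP 8080;
    # all other ingress to the selected pods is dropped
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Because policies are additive, a second policy selecting the same pods could later open another port without modifying this one. Note that Network Policies only take effect if the cluster’s network plugin supports them.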

Security contexts

Aside from limiting the types of sources with which pods can communicate, admins can further harden their microservices using Kubernetes by limiting the privileges and access control settings for a pod or a container. That’s the purpose behind specifying a security context.
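As a sketch of what such hardening settings look like (the pod name and image are hypothetical), a security context can be set at the pod level and tightened further per container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:
    # Run all containers in the pod as a non-root user
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        # Prevent the process from gaining additional privileges,
        # make the root filesystem read-only, and drop all Linux capabilities
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

Container-level settings take precedence over pod-level settings where both apply.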

Organizations can do this using OPA Gatekeeper. This customizable webhook enables admins to enforce security policies in their Kubernetes environments through configurations, not code. With Gatekeeper, they can mandate that all container images come from approved repositories, for instance, and that all pods have resource limits, among other specifications.
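For example, assuming the K8sAllowedRepos ConstraintTemplate from the community gatekeeper-library has been installed, a constraint restricting images to an approved registry (the registry prefix below is illustrative) is pure configuration:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: images-from-approved-repo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    # Only container images whose reference starts with this
    # prefix will be admitted to the cluster
    repos:
      - "registry.example.com/"
```

Gatekeeper’s admission webhook then rejects any pod whose images fall outside the allowed repositories before they are ever scheduled.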

Secrets management

Lastly, admins can use Kubernetes to store and manage passwords, OAuth tokens and SSH keys within a Secret. This method of storage is more secure than storing sensitive information in a pod definition or a container image.
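As a sketch (the name and value below are hypothetical, not real credentials), a Secret manifest holds its values base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  # Values under `data` must be base64-encoded,
  # e.g. produced with: echo -n 's3cr3t-password' | base64
  password: czNjcjN0LXBhc3N3b3Jk
```

Pods can then consume the Secret as an environment variable or a mounted volume instead of hard-coding the value in the pod spec.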

That being said, it’s important for admins to realize that Kubernetes Secrets are stored as unencrypted base64-encoded strings by default. This means that someone with API access or access to etcd could view those secrets in plaintext. In response, admins can consider enabling encryption at rest as well as role-based access control (RBAC) for their organization’s Secrets.
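To see why base64 offers no protection on its own, decoding a Secret’s value (the value below is a hypothetical example, not a real credential) takes a single command:

```shell
# base64 is an encoding, not encryption: anyone who can read the
# manifest or etcd can recover the plaintext in one step.
echo 'czNjcjN0LXBhc3N3b3Jk' | base64 -d
# → s3cr3t-password
```

This is exactly why encryption at rest and tight RBAC on Secrets matter.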

Part of the larger CKS knowledgebase

Using Network Policies, Gatekeeper and secrets management to protect their microservices is just one of the things that candidates can learn by becoming a Certified Kubernetes Security Specialist (CKS). They can also learn how to set up a cluster and harden the system, among other topics.

Here’s StackRox with more information about CKS:

The CKS is the third Kubernetes based certification backed by the Cloud Native Computing Foundation (CNCF). CKS will join the existing Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) programs. All three certifications are online, proctored, performance-based exams that will require solving multiple Kubernetes security tasks from the command line…. The CKS focuses specifically on Kubernetes’ security-based features such as role-based access control (RBAC) and network policies and utilizing existing Kubernetes functionality to secure your clusters.

For more information about CKS, please visit CNCF’s website.
