Jimmy Mesta | Mar 7, 2024 | 9 min read

Kubernetes Administration: Cloud Native Security Basics Part V

Part I: Deployment and Container Orchestration
Part II: Container Security
Part III: Container Deployment
Part IV: Introduction to Kubernetes
Part V: Kubernetes Administration
Part VI: Kubernetes Security Checklist

Intro

In the previous post in this series, we gave an introduction to Kubernetes, including information about its history and basic Kubernetes architecture. In order to move further to discussing how to effectively secure Kubernetes, we must dive into Kubernetes administration and some of the associated risks.

 

The Basics of Kubernetes Administration

To begin deploying containers successfully on a cluster, you must first establish a means of connecting to the API server that serves as the cluster's "gateway." Common administration tasks include deploying new objects (such as pods or services), scaling the number of pods, debugging applications, deleting services, and creating secrets. Kubernetes is flexible by design and supports a wide variety of mechanisms for administration. Since everything in Kubernetes is API-driven, it is possible to interact with Kubernetes programmatically, and many popular languages have Kubernetes client libraries. Let’s look at a few common methods and components used to interact with Kubernetes. This is also covered, albeit more briefly, in this list of prerequisites for securing Kubernetes.

 

Kubectl

The most common way to interact with a Kubernetes cluster is through the kubectl command line interface. Kubectl gives cluster administrators a user-friendly way to interact directly with a cluster. Nearly all administrative tasks associated with deploying containers in a cluster can be carried out using kubectl. Here are some example commands:

 

Create a deployment from the my-deployment.yaml manifest:

  • kubectl create -f my-deployment.yaml

 

Print a list of the pods running in the cluster:

  • kubectl get pods

 

Print details about a particular pod to the terminal:

  • kubectl describe pod <podName>

 

Scale the object defined in my-pod.yaml to three replicas:

  • kubectl scale --replicas=3 -f my-pod.yaml

 

API server

Because Kubernetes is API-driven, it is possible to access API objects directly. If we looked under the hood of the kubectl command-line interface, we would see that it is simply making REST calls to the API server running on the Kubernetes master. It is possible to interact directly with the API server by submitting appropriately formed REST requests. This is helpful for automated tools or systems that may not have kubectl available, or for more advanced interaction with the cluster. The API server should always be configured to require strong authentication.
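As a minimal sketch of such a request (assuming a recent Kubernetes version where you can mint a token for the default service account, the cluster CA certificate is saved at ca.crt, and the API server address is a placeholder), listing pods in the default namespace looks like this:

  • TOKEN=$(kubectl create token default)
  • curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/api/v1/namespaces/default/pods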

 

Kubernetes dashboard

The Kubernetes dashboard is a web-accessible interface that allows users to carry out similar actions to kubectl without the command line. It is capable of performing basic cluster management tasks such as deploying, scaling, and deleting applications; troubleshooting; and reviewing metrics. 

The dashboard is a useful tool for visualizing a Kubernetes cluster, but it can also be a juicy target for malicious users. Without the appropriate access controls in place, an unauthorized user may be able to perform administrative tasks on a cluster. This obviously can have serious ramifications.

Make sure you allow only authenticated access to the dashboard. The dashboard's service account should have limited permissions, because users can effectively skip the login screen and use the dashboard with whatever access that service account holds. Additionally, you should not expose the dashboard to the Internet.
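One safer access pattern, rather than exposing the dashboard publicly, is to tunnel to it from an administrator's workstation with kubectl proxy; the service path below assumes a recent dashboard release installed in the kubernetes-dashboard namespace and varies by version:

  • kubectl proxy
  • Then browse to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/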

The Kubernetes commercial and open source ecosystem has a wide range of third-party dashboards and administrative interfaces that should not be trusted unless vetted by the security team.

Each of these administration mechanisms is built for particular scenarios and requires the appropriate application of security controls. Kubernetes is not secure by default.

 

Kubernetes networking

Networking in Kubernetes is complex and extremely extensible. Here we'll take a high-level look in order to establish a baseline understanding. In a cluster, a number of components all need to communicate with one another to form a usable system. In a traditional web application architecture, the traffic flow would look something like this:

external load balancer → web proxy such as nginx → application server → database server

In Kubernetes, we can map these traditional server-oriented mechanisms mainly to pods, services, ingress components, and load balancers.

 

Container-to-Container

If multiple containers exist within a pod, they communicate with one another over localhost. Multi-container pods are tightly coupled and should be used only when the containers genuinely need to share networking and storage.
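As a minimal sketch of this pattern (the image names and port are illustrative), here is a two-container pod in which a sidecar polls the main container over localhost:

kind: Pod
apiVersion: v1
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx        # main container listening on port 80
  - name: sidecar
    image: busybox      # shares the pod's network namespace with "web"
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 30; done"]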

 

Pod-to-Pod

Each pod in a cluster is assigned its own IP address, and all containers within a pod share that networking namespace, so they have the same IP address. This forms a flat networking model in which each pod can communicate on the network much like a virtual machine would. Pods in the same cluster can connect directly to each other without network address translation (NAT). In the next post in the series, we will see how network policies can be used to prevent a pod from being able to communicate with another pod.
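To see this in practice (the pod names are placeholders, and the example assumes the source container's image includes ping), you can look up a pod's IP and reach it directly from another pod with no NAT in between:

  • kubectl get pods -o wide
  • kubectl exec <pod-a> -- ping -c 1 <pod-b-ip>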

 

Pod-to-Service

As discussed earlier, a service may have any number of pods associated with it. Each service is assigned a client-accessible IP address, and traffic sent to that service IP is proxied to one of the pods backing the service, ultimately ending up at a destination pod. In effect, a service is a stable virtual IP used purely for routing, a form of NAT in front of the pods.
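For example, assuming a service named web-frontend in the default namespace, another pod can reach it through the cluster DNS name rather than chasing individual pod IPs:

  • curl http://web-frontend.default.svc.cluster.local:80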

 

External-to-Service

We often need to expose certain internal components of our cluster to the outside world. For example, if we build an API that a mobile application needs to reach, we must tell Kubernetes to route external traffic to that API service and then on to its pods. In Kubernetes, we can set the LoadBalancer type in our service definition; depending on your cloud provider and configuration, this will automatically provision a cloud load balancer.

Another option for exposing a service to the external world is to use an Ingress. Ingress is an API object that manages external access to the services in a cluster, typically HTTP. This operates at the HTTP layer (Layer 7) versus the LoadBalancer service type, which typically only provides network-based load balancing. Ingress can provide load balancing, SSL termination, and name-based virtual hosting.

Using a previous example, we can have our web-frontend service create an external load balancer by setting the type stanza to LoadBalancer (the selector label below is assumed to match the frontend pods):

kind: Service
apiVersion: v1
metadata:
  name: web-frontend
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: web-frontend   # assumes the frontend pods carry the label app: web-frontend
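For the Ingress approach described above, a minimal manifest might look like the following sketch; the host name is illustrative, it points at the web-frontend service from the previous example, and it assumes an ingress controller (such as ingress-nginx) is already running in the cluster.

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: web-frontend-ingress
spec:
  rules:
  - host: app.example.com          # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend     # the service defined above
            port:
              number: 80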

 

The Kubernetes lifecycle

 

At this point, it’s worth diving briefly into the lifecycle of a Kubernetes workload. Kubernetes defines three basic states for a container: Waiting, Running, and Terminated. The status of the containers in a pod can be seen by using the kubectl describe pod <pod-name> command. In the most basic case, a single-container pod, the container states map fairly cleanly onto the pod lifecycle phases. The pod lifecycle begins with a Pending phase; in a single-container pod this essentially encompasses the Waiting state of the container lifecycle. Both pods and containers have a Running stage, and in a single-container pod these are essentially equivalent (we’ll get to some nuance here in a moment).

At first glance, perhaps the most important distinction between a pod and a container is that pods can contain more than one container, including init containers. The biggest consequence of this for our purposes is how the Running phase of the pod lifecycle is defined. The Running phase for a pod does not indicate that every container is up and running, but rather that at least one of the containers has made it at least as far as the Running state of the container lifecycle. Given how central pods are to all the administrative tasks mentioned so far, it is important to understand that pod-related administration always happens in the context of this lifecycle.
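A quick way to see this distinction for yourself (substituting your own pod name) is to query the pod phase and the per-container states separately:

  • kubectl get pod <pod-name> -o jsonpath='{.status.phase}'
  • kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].state}'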

 

Kubernetes consoles exposed to the Internet

Kubernetes is a relatively new and ever-evolving technology. Even sophisticated organizations fall victim to simple misconfiguration errors, which can lead to large-scale fallout.

In 2022, Cyble Research Labs uncovered more than 900,000 misconfigured Kubernetes clusters. The Kubernetes consoles of those clusters were exposed on the Internet, which means they could potentially be accessed by anyone.

An exposed Kubernetes console can lead to significant security incidents, like the one Tesla suffered in 2018, where attackers took control of its Kubernetes console and eventually exposed sensitive information stored in Tesla's AWS S3 buckets.

The Kubernetes console should be hardened (Kubernetes is generally misconfigured out of the box) by allowing access only to authorized users, using strong passwords, disabling unnecessary ports, and ensuring it is patched and up to date. Of course, you are better off not exposing it to the public Internet at all and disallowing external traffic.

Misconfigurations happen. Some, such as Tesla's Kubernetes dashboard exposure, have grave consequences. Containers and orchestration tools are still relatively new and rapidly evolving. In most cases you will be better off keeping the Kubernetes administrative console isolated from the Internet. It is best to understand the technologies you are deploying and to always verify that the controls you think exist actually do exist.



A Cryptocurrency Miner Found on an Internal Kubernetes Cluster

In 2019, the JW Player engineering team discovered a Kubernetes node that was part of an internal cluster reporting higher-than-normal average CPU usage. Upon investigation, the team found that a cryptocurrency miner was running on the host itself, not as a container. How could something like this happen to what was thought to be an internal-only cluster?

During the incident response process, the team noticed that an exposed dashboard from an administrative tool called Weave Scope was assigned a public IP address through an AWS ELB. This public IP served a powerful dashboard and required no authentication.

Attackers discovered an in-browser terminal available on the dashboard and were able to quickly escape from the container context to become root on the host itself. This is where the cryptocurrency miner was dropped.

The container in question ran as a privileged pod in the cluster and as the root user, two very dangerous configurations that can quickly lead to a container breakout scenario. Always ensure your cluster is hardened and privileges are minimized. If possible, do not permit deployments to create load balancers automatically in your cloud environment.
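As a minimal sketch of the opposite posture (the pod name and image are illustrative), a pod spec can explicitly refuse root and privileged execution:

kind: Pod
apiVersion: v1
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    securityContext:
      privileged: false                   # never request the privileged flag
      runAsNonRoot: true                  # refuse to start if the image runs as UID 0
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]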

 

Dero Cryptocurrency Miner

The Dero miner also took advantage of Kubernetes defaults. The attacks start with the threat actors scanning for exposed, vulnerable Kubernetes clusters running with --anonymous-auth=true and permissive authorization, allowing anyone anonymous access to the Kubernetes API. This is just one way into a Kubernetes cluster. While KSOC can detect clusters that allow anonymous authentication and warn administrators, the default RBAC configuration grants anonymous users virtually no permissions, and managed Kubernetes offerings (e.g., EKS, AKS, GKE) likewise do not expose workloads to anonymous users.
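A quick external check (substituting your API server's address) is to probe the API unauthenticated and confirm the cluster refuses to answer; a hardened cluster should return 401 Unauthorized or 403 Forbidden rather than a resource listing:

  • curl -k https://<api-server>:6443/api/v1/namespaces/default/pods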

 

Conclusion


Kubernetes administration is clearly an important factor in Kubernetes security, as Kubernetes is generally not secure by default and components like the API server have broad control over your clusters. In the next post, having discussed the basic components of container security and Kubernetes, we will dive directly into an ultimate checklist for Kubernetes security.


Jimmy Mesta

Jimmy Mesta is the founder and Chief Technology Officer at RAD Security. He is responsible for the technological vision for the RAD Security platform. A veteran security engineering leader focused on building cloud-native security solutions, Jimmy has held various leadership positions with enterprises navigating the growth of cloud services and containerization. Previously, Jimmy was an independent consultant focused on building large-scale cloud security programs, delivering technical security training, producing research and securing some of the largest containerized environments in the world.
