Jimmy Mesta · Sep 23, 2022 · 4 min read

How To Keep Track Of What’s Running In Your Kubernetes Cluster

Kubernetes, for many organizations, is a giant box of mystery. Engineers sort of know what’s running, but then they install a third-party tool, and before they know it there’s a service account for that tool with full-blown admin access to every Kubernetes resource inside the cluster.

Aside from proper security, the next most important piece of the Kubernetes pie is understanding what’s running in a cluster and how to keep track of it.

In this blog post, you’ll learn about three key ways to properly manage the state of a Kubernetes cluster.

 

Creating Proper Policies

If you’re installing third-party tools or platforms, which you most likely are (ArgoCD, Rancher, Prometheus, Grafana, etc.), each of those tools comes with a service account, and with every service account come Roles and RoleBindings. You need proper policies in place to ensure that what’s installed and modified is in fact supposed to be there and supposed to have the level of access that it has.

For example, let’s say you’re installing a third-party tool that needs to do the following (a minimal manifest sketch follows the list):

  • Scan the Kubernetes cluster to see the resources
  • Connect to some of those resources when needed
  • Have its own dedicated namespace
  • Install configurations to Kubernetes resources when needed
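To make the namespace and service account side of that list concrete, here is a minimal sketch of what such an install typically creates. The names below (third-party-tool, and a namespace of the same name) are hypothetical placeholders, not tied to any particular product:

```yaml
# Hypothetical sketch: a dedicated namespace and service account
# for a third-party tool. Names are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: third-party-tool
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: third-party-tool
  namespace: third-party-tool
```

What matters is what gets bound to that service account next, and that’s where policies come in.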

That list covers, at a high level, what many Kubernetes tools and platforms need. Let’s take one concrete example: GitOps.

With proper policies in place, a GitOps solution will have just the access it needs: the ability to deploy certain types of Kubernetes resources, modify certain Kubernetes resources, and exercise a limited amount of control over the cluster.

Without proper policies, most GitOps solutions will have free rein over the cluster. It’s pretty much like giving root/admin access to a third-party utility.
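In RBAC terms, the difference looks roughly like the sketch below. The service account name (gitops-controller), its namespace (gitops), and the target namespace (apps) are hypothetical; the exact rules depend on which GitOps tool you run:

```yaml
# The "free rein" anti-pattern: the tool's service account is bound
# to cluster-admin across the entire cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitops-controller-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitops-controller
    namespace: gitops
---
# A scoped alternative: the controller can only manage the resource
# types it actually deploys, and only in the apps namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitops-deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitops-deployer
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitops-deployer
subjects:
  - kind: ServiceAccount
    name: gitops-controller
    namespace: gitops
```

The point isn’t these specific rules; it’s that you start from what the tool needs rather than from cluster-admin.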

The first step in coming up with a game plan with your team for keeping track of what’s installed in your cluster is deciding what should be allowed to run there in the first place.
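One way to encode “what’s allowed to run here” is with an admission policy engine. Purely as an illustration, and assuming a policy engine like Kyverno is installed (the registry name below is a hypothetical placeholder), a policy along these lines rejects workloads whose images don’t come from an approved registry:

```yaml
# Sketch of a Kyverno ClusterPolicy that only allows images from an
# approved registry. The registry name is a placeholder.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: enforce
  rules:
    - name: approved-registries-only
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from registry.example.com."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"
```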

Deploy With Automation

Deploying without automation looks something like this at a high level:

  • SSH into the server
  • Install some third-party tool or package
  • Exit the server

Before you know it, there’s a package running inside the cluster that isn’t documented, properly managed, or deployed through any repeatable process.

The best thing that you can do to keep track of what’s happening inside of a Kubernetes cluster is to ensure that it’s all properly accounted for, and there’s no better way to do so than by automating your workloads.

Any time that you plan on installing a third-party platform/tool in Kubernetes, ensure that it’s in code and that code is inside of a git repo. That way, you can control what’s being installed, how it’s being installed, and how it can be removed.
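For example, with Argo CD (one of the GitOps tools mentioned earlier), the install itself can be described as an Application manifest that lives in git. The repo URL, path, and namespaces below are hypothetical placeholders:

```yaml
# Sketch of an Argo CD Application: the third-party tooling is installed
# from a git repo, so the cluster state is documented and repeatable.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/cluster-addons.git
    targetRevision: main
    path: monitoring
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because the manifest is in git, adding, changing, or removing the tool is a pull request rather than an undocumented change made directly on the cluster.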

Use KSOC

Yeah, this probably sounds a bit obvious - “use the tool we’re building”, but continue reading and you won’t be disappointed.

The idea behind KSOC is to be a security management hub for your Kubernetes clusters, and that means being a central location for you to see and understand what’s happening inside of a Kubernetes cluster. That could be anything from:

  • Security concerns/findings
  • Rules/policies
  • Workloads

With KSOC, even if you aren’t fully implementing the security suite, you can see Deployments, DaemonSets, Services, and any other high-level Kubernetes resource that’s running in your cluster.

This gives you the ability to see what workloads are running and a better understanding of which workloads shouldn’t be running, if any.

If you’re using a built-in method like kubectl get deployments to see what’s running inside your cluster, it can get pretty daunting and cumbersome: there are a lot of resources running, spread across a lot of namespaces. Having one place to go to see running workloads is crucial in a large environment.

Beyond seeing the workloads running in your Kubernetes cluster, which is most useful for Devs, there’s also an Ops perspective. With KSOC, you can manage workloads across every cluster connected to your KSOC environment. That way, your cluster information lives in one place and you don’t have to log into multiple UIs to see what’s happening across your entire Kubernetes environment.

Closing Thoughts

Figuring out how to understand what’s running in your Kubernetes cluster is more of a team/people/management problem than a technical one. Start by understanding how workloads are deployed, then move on to how those workloads are managed and viewed. If Kubernetes resources are being deployed manually from every engineer’s machine, it will always be difficult to keep track of what’s running.

Want to chat? Schedule a demo with our team.

 
