The Rise Of Kubernetes and PaaS: What It Means for Cost Allocation

Deploying to the cloud can be complicated. The average cloud deployment relies on a veritable forest of tools to keep everything running smoothly.

Access to this diverse ecosystem of capabilities has enabled some incredible software engineering. In particular, Kubernetes and platform-as-a-service (PaaS) solutions simplify cloud deployments. However, they present challenges for analyzing and allocating costs.

PaaS and Kubernetes create shared environments that dynamically allocate resources based on demand. While this is an important capability, it can quickly obscure actual resource costs. We’ll explore the differences between Kubernetes and PaaS, their cost allocation challenges, and how Yotascale helps you overcome these challenges to understand your cloud spend.

Tracking Kubernetes and PaaS Costs

While Kubernetes and PaaS solutions may seem quite similar, they each have their benefits and challenges — particularly related to cost analysis.

Compared to a traditional all-inclusive PaaS, Kubernetes is a more “DIY” in-house solution. A PaaS handles cloud infrastructure at the service level, while Kubernetes operates at the container level. Developers are free to build just about anything with Kubernetes; they’re no longer limited to whatever services a PaaS offers. In some instances, Kubernetes complements rather than replaces cloud deployments that rely on PaaS solutions.

Regardless of how we’re using Kubernetes, we need a way to track its costs.

Since PaaS provides a complete solution for deploying and operating on the cloud, the service provider is often keen to handle certain aspects of cost tracking. Our PaaS can automate resource allocation as needed and is always happy to send us a bill for those services.

Kubernetes, in contrast, requires developers to operate with more awareness of the resources they request. It adds an extra layer of abstraction between our workloads and the actual cost of the resources they consume, placing the responsibility for allocating resources and tracking usage squarely on developers. Since it’s a more hands-on solution, Kubernetes requires extra diligence in-house to ensure someone tracks spending effectively.

While a typical PaaS might offer a more detailed up-front view of costs, it’s also easy to lose track of which teams and services are responsible for using a resource. Without some system of tagging, locating the responsible parties from a swamp of log files can be dizzying.

No matter which solution we pick, it’s vital to establish a method for determining the underlying resource use costs and tracing those costs back to individual teams, services, and clusters.
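As a concrete illustration, tracing costs back to teams can be as simple as grouping tagged resource records by owner. This is a minimal sketch; the tag keys, team names, and dollar figures below are hypothetical, not real billing data.

```python
# Minimal sketch of tag-based cost allocation (all figures hypothetical).
# Each resource record carries tags identifying its owning team and service.
from collections import defaultdict

resources = [
    {"cost": 42.10, "tags": {"team": "payments", "service": "checkout"}},
    {"cost": 17.50, "tags": {"team": "payments", "service": "billing"}},
    {"cost": 63.25, "tags": {"team": "search", "service": "indexer"}},
]

def allocate_costs(resources, tag_key):
    """Sum costs grouped by a tag key (e.g., 'team' or 'service')."""
    totals = defaultdict(float)
    for record in resources:
        # Surface untagged resources explicitly instead of hiding them.
        owner = record["tags"].get(tag_key, "untagged")
        totals[owner] += record["cost"]
    return dict(totals)

print(allocate_costs(resources, "team"))
```

The same function works for any tag key, so the same records can be rolled up by team for chargebacks or by service for capacity planning.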

Zeroing in on Kubernetes Costs

To ensure we’re effectively tracking and allocating resources to our Kubernetes clusters, we need to ask where our Kubernetes costs are coming from in the first place. Kubernetes creates shared environments that cultivate efficient DevOps workflows, but evaluating the cost of resources consumed in these common areas can be challenging.

With Kubernetes, the question “how much does this container cost?” may not be granular enough. Over its lifetime, teams may use a single container in development, testing, and production environments. Various groups may interact with the container and allocate resources differently based on their needs.

To perform an adequate cost analysis, we need to track resource use across our Kubernetes clusters throughout the entire lifecycle, paying careful attention to what happens at each stage.

Detecting Cost Spikes

One of Kubernetes’ most attractive features is its ability to scale up and down automatically in response to load. Kubernetes helps ensure our apps are always ready to meet traffic demands by automatically provisioning and removing containers as needed. However, this dynamic scaling can have severe implications for cost allocation.

We need to understand our system’s requirements to keep costs predictable and within budget. DevOps teams sometimes overprovision resources or fail to set reasonable ceilings for their services’ demands.

Overprovisioning and lazy resource management are a surefire recipe for cost spikes, but they are far from the only cause. Any time our containers’ demands for resources outstrip our allocation estimates, we find a cost spike.
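In code, “demand outstripping the allocation estimate” is just a threshold check over a cost series. Here is a hypothetical sketch; the hourly figures, estimate, and threshold factor are illustrative assumptions.

```python
# Hypothetical sketch: flag cost spikes where hourly spend exceeds the
# allocation estimate by a threshold factor. All figures are illustrative.
def find_cost_spikes(hourly_costs, estimate, threshold=1.5):
    """Return (hour, cost) pairs where cost exceeds threshold * estimate."""
    return [
        (hour, cost)
        for hour, cost in enumerate(hourly_costs)
        if cost > threshold * estimate
    ]

# A scale-out event at hour 3 pushes spend well past the $4/hour estimate.
hourly_costs = [4.0, 4.2, 4.1, 9.8, 4.3]
print(find_cost_spikes(hourly_costs, estimate=4.0))  # [(3, 9.8)]
```

A real pipeline would feed this from billing exports rather than a hard-coded list, but the detection logic stays the same.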

We should investigate cost spikes with as much care and scrutiny as a system outage. Many questions present themselves, including:

  • What team was responsible for the container when the cost spike occurred?
  • What demands on the system led to the resource use?
  • What service(s) contributed most to the cost spike?
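Once a spike window is identified, questions like these can be answered mechanically if usage records carry team and service tags. A hypothetical sketch, with illustrative records rather than a real billing export:

```python
# Hypothetical sketch: rank services (or teams) by their contribution to a
# cost spike window. Records and tag values are illustrative.
spike_window = [
    {"service": "api-gateway", "team": "platform", "cost": 3.0},
    {"service": "recommender", "team": "ml", "cost": 11.5},
    {"service": "api-gateway", "team": "platform", "cost": 2.5},
]

def rank_contributors(records, key):
    """Total cost per tag value, sorted so the biggest contributor is first."""
    totals = {}
    for record in records:
        totals[record[key]] = totals.get(record[key], 0.0) + record["cost"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank_contributors(spike_window, "service"))
# [('recommender', 11.5), ('api-gateway', 5.5)]
```

Running the same ranking with `key="team"` answers the “which team was responsible” question from the same records.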

Defining Kubernetes Costs

Whether investigating a cost spike or simply trying to understand our Kubernetes spend, it’s essential to look at what is occurring on both micro and macro scales. Ultimately, our Kubernetes cluster costs come down to their underlying resource use. To perform an adequate cost analysis, we need to determine the actual cost of the resources used and trace that resource use back to individual services (and teams).

Kubernetes provides a suite of valuable tools to monitor resource use over time, but the duty falls on developers to create an effective method of associating resource use with cost. They must carefully track allocation versus use to fine-tune resource budgeting.
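Associating usage with cost ultimately means multiplying metered quantities by unit rates. A minimal sketch, assuming made-up rates per vCPU-hour and per GB-hour; real rates vary by provider and instance type.

```python
# Hypothetical sketch: translate raw usage metrics into dollars using
# assumed unit rates. Real rates vary by cloud provider and instance type.
RATES = {
    "cpu_hours": 0.04,      # assumed $ per vCPU-hour
    "mem_gb_hours": 0.005,  # assumed $ per GB-hour of memory
}

def usage_to_cost(usage):
    """usage: metric -> quantity, e.g. {'cpu_hours': 120, 'mem_gb_hours': 480}."""
    return sum(RATES[metric] * quantity for metric, quantity in usage.items())

pod_usage = {"cpu_hours": 120, "mem_gb_hours": 480}
print(round(usage_to_cost(pod_usage), 2))  # 7.2
```

Comparing this figure against the cost of what was *requested* (rather than used) is exactly the allocation-versus-use gap the paragraph above describes.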

Charging Back to Each Team

One of the most effective ways of getting developers to take responsibility for their resource use is by simply making them aware. When we provide each team with an internal chargeback “bill” for their resource use, developers can track their resource costs as an important KPI. Some developers even gamify their chargeback bills, encouraging a culture of minimizing resource costs.

For accurate Kubernetes chargebacks, we need to carefully tag and monitor container resource use as these containers make their way through the DevOps lifecycle. While Kubernetes’ API provides some out-of-the-box tools that help developers do this, setting manual quotas and tagging everything by hand becomes tedious quickly — not to mention expensive, as we pay skilled developers to do this.
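One way to keep that tagging consistent without hand-auditing every manifest is to validate labels programmatically before workloads enter the chargeback pipeline. A minimal sketch; the required label keys here are assumptions, not a Kubernetes standard.

```python
# Hypothetical sketch: check that workload metadata carries the labels a
# chargeback pipeline needs. The label keys are assumptions, not a standard.
REQUIRED_LABELS = {"team", "service", "env"}

def missing_labels(metadata):
    """Return the required label keys absent from a workload's metadata."""
    present = set(metadata.get("labels", {}))
    return sorted(REQUIRED_LABELS - present)

# This pod is missing its 'service' label, so its costs can't be traced.
pod_metadata = {"labels": {"team": "payments", "env": "prod"}}
print(missing_labels(pod_metadata))  # ['service']
```

A check like this can run in CI against manifests, so unattributable spend is caught before deployment rather than discovered on the bill.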

The data available from Kubernetes’ API is quite useful, but developers need more advanced visibility tools to scale and use this data effectively. Developers also need a way to streamline the process of granularly tracking resource use across clusters and containers.

Improving Cost Visibility with Yotascale

Yotascale helps DevOps manage Kubernetes costs through a system of detailed automated tagging. These detailed tags enable users to explore both the big picture and the microscopic view of resource use.

Yotascale’s tagging helps track Kubernetes resource use at four distinct levels of detail (namespace, Pod, Deployment, and label). Together, these tagging levels provide an extremely detailed view of which teams use resources, where they use them, and which stage of the DevOps lifecycle each container is in. In turn, you can create chargeback bills for the individual teams collaborating on your clusters.

Rather than burying resource use and cost details in a labyrinth of logs and reports, Yotascale places our Kubernetes resource management into a rich interface. Developers can explore the interface to take a deep dive into resource consumption.

Want to see exactly how your team is using your Kubernetes resources? Sign up for a 14-day free trial of Yotascale to get tagging and saving.
