
5 tips for architects on managing cloud service provider spending

Want a clear idea of what to expect from your cloud service provider bills? Consider taking these five practical steps in advance.

One of the most common questions about moving to the cloud is what it'll cost. There's a perception that hardware you own has an absolutely predictable cost, while the cloud environment is by definition malleable. A web application might be running in one pod or instance one moment—and five pods the next. The greatest strengths of the cloud are also the most intimidating: the ability to scale, and the knowledge that the resources to scale also carry a price.

The point I often make is that the cost to scale is more or less constant. You can choose to pay that price on demand, specifically for the duration of time that a new worker node is required, or you can choose to pay it all at once at the end of the year, just before your budget is up for review. The important thing is to understand the factors involved. For hardware in your server room, you probably already know the costs: there's the hardware, the drives, the cables, racks, environmental control, and so on.

For the cloud, it's mostly virtual, and it's often dependent upon your cloud service provider and what platform you choose to run. Some options provide more control than others, but here are five things you should do in advance so you have a clear idea of what to expect.

1. Estimate costs

The best way to get an idea of what you can expect to spend is to sit down and look at what services you're currently using or what services you intend to implement. Get a total of your current resources in terms of CPUs, memory, and storage usage. You can convert these components into cloud resources according to how each cloud service provider bills.

For example, Red Hat OpenShift on AWS (ROSA) estimates that 1 CPU core is the equivalent of 2 vCPU in terms of an EC2 instance. EC2 instances come in different "sizes" (m5.large, m5.xlarge, and so on), and each size has a very specific profile for vCPU, memory, storage options, and bandwidth. An m5.large, for instance, is currently 2 vCPU and 8 GB RAM, while an m5.xlarge is 4 vCPU and 16 GB RAM.

Knowing the profiles of worker nodes available from a cloud service provider means you can map your current or expected usage to cloud services with actual price tags. ROSA, for example, calculates its subscription fees per 4 vCPU, so you can estimate what you'll be spending on average based on your expected number of active worker nodes.
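The core-to-vCPU conversion and the per-4-vCPU subscription granularity described above can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration, not a pricing tool: the function name, the sample fleet, and the assumption that partial units round up are all hypothetical; check your provider's actual billing rules.

```python
import math

def estimate_subscription_units(physical_cores: int, vcpu_per_core: int = 2,
                                vcpu_per_unit: int = 4) -> int:
    """Convert physical cores to vCPU, then to billable subscription units.

    vcpu_per_core=2 follows the rule of thumb that 1 CPU core is roughly
    2 vCPU; vcpu_per_unit=4 reflects a subscription fee billed per 4 vCPU.
    Rounding up partial units is an assumption, not a documented rule.
    """
    total_vcpu = physical_cores * vcpu_per_core
    return math.ceil(total_vcpu / vcpu_per_unit)

# A hypothetical fleet: three 8-core servers plus one 16-core server
cores = 8 + 8 + 8 + 16
print(estimate_subscription_units(cores))  # 40 cores -> 80 vCPU -> 20 units
```

Multiply the result by your provider's per-unit rate and you have a first approximation of the subscription line item, before separate charges like control planes and load balancers.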

Some resources are separate line items from worker nodes. For instance, you may pay separately for the control plane that manages the worker nodes, or for load balancers, storage service, Network Address Translation (NAT), and so on.

Bottom line: Understand what you're introducing or migrating to the cloud, and map those components to billable items from your cloud service provider.

[ What should you know about AI/ML workloads and the cloud? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

2. Know your environment

A literal overlay of your current environment onto the cloud isn't always the best way to stretch your dollar. The point of the cloud is that it can use as little or as much of its available resources as necessary to ensure an optimal experience for users. Your current setup probably isn't doing that, so part of maximizing your budget is understanding what you can downsize and what will benefit from being able to scale.

How you divide the work is up to you, so having familiarity with the services you're going to run is significant. For instance, just because an application happens to be running on a dedicated server with 16 physical cores and 32GB RAM doesn't mean it needs a baseline of 8 vCPU. Monitor the server or look at the logs, and determine how that application "fits" into its physical server before you assign it to an instance of a specific capability and price. The cloud can scale up. Take advantage of that.
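The right-sizing idea above can be sketched as a small selection function: given average observed usage, pick the smallest instance profile that covers it with some headroom and let scaling handle the peaks. The profile list, the 25% headroom figure, and the function name are all assumptions for illustration; consult your provider's instance catalog for real profiles.

```python
# Hypothetical profiles (name, vCPU, RAM in GB); check your provider's catalog
PROFILES = [("m5.large", 2, 8), ("m5.xlarge", 4, 16), ("m5.2xlarge", 8, 32)]

def smallest_fit(avg_vcpu: float, avg_ram_gb: float, headroom: float = 1.25) -> str:
    """Pick the smallest profile covering average usage plus headroom.

    The 25% headroom is an arbitrary buffer over the observed average;
    autoscaling, not the baseline instance, should absorb the peaks.
    """
    for name, vcpu, ram in sorted(PROFILES, key=lambda p: (p[1], p[2])):
        if vcpu >= avg_vcpu * headroom and ram >= avg_ram_gb * headroom:
            return name
    return PROFILES[-1][0]  # nothing fits: take the largest and keep scaling

# An app averaging 1.5 vCPU and 5 GB RAM doesn't need an 8 vCPU baseline
print(smallest_fit(1.5, 5.0))  # m5.large
```

The monitoring data feeding `avg_vcpu` and `avg_ram_gb` comes from watching the server or its logs, as described above.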

If you're not migrating anything and just looking to start fresh on the cloud, then find out from your cloud service provider what the typical minimum setup is. A common minimum ROSA deployment, for instance, consists of three control plane nodes, at least three worker nodes, two load balancers, and two storage (S3) buckets. This may vary from provider to provider, but three tends to be a magic number, as it helps avoid split-brain scenarios, and redundancy in all things is a must.

Bottom line: Don't just copy your current setup to the cloud. Play to the cloud's strengths, and know what your environment actually requires.

[ You might also be interested in reading Migrating 3,000 applications from another cloud platform to Kubernetes: Keys to success. ]

3. Find the variables

No matter how exciting the prospect, dynamic scaling is also scary. What if your application becomes too popular? What if all the traffic you get ends up costing you more than you expected?

If you're very concerned about fluctuation in demand, look into flexible payment options. ROSA features an annual cost option for its subscription fee, allowing you to pay roughly a flat fee for worker nodes over the course of a year rather than paying per EC2 instance. As with any prepaid plan, it's a calculated gamble: should your traffic underperform, you may end up paying more than you would have on demand, but if traffic runs high, the flat fee works in your favor.
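The gamble described above reduces to a simple break-even comparison between on-demand spend and a prepaid annual fee. All of the figures below are hypothetical placeholders, not real rates; plug in the numbers from your provider's price list.

```python
def cheaper_plan(hourly_rate: float, annual_fee: float,
                 expected_node_hours: float) -> str:
    """Compare expected on-demand spend against a prepaid annual fee.

    All inputs are hypothetical; use your provider's actual rates.
    """
    on_demand = hourly_rate * expected_node_hours
    return "annual" if annual_fee < on_demand else "on-demand"

# At a made-up $0.171/hour, a node running all year costs ~$1,498 on demand,
# so a hypothetical $1,200 annual fee only wins if usage stays high.
print(cheaper_plan(0.171, 1200.0, 365 * 24))  # annual
print(cheaper_plan(0.171, 1200.0, 2000))      # on-demand
```

Running this comparison per node class gives you a concrete threshold: above that many node-hours per year, the prepaid option is the predictable, cheaper choice.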

Bottom line: There's great comfort in predictability. Look into contract options to make your bill more consistent.

4. Turn off the lights when you leave

When you stop (or "hibernate," in ROSA terminology) an EC2 instance, you don't pay for it. That may not be an option for you if you're providing an always-available service, but for development or research teams, it can be a significant cost-cutting technique. You can also stop the entire cluster.
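To put a number on the "turn off the lights" habit, you can estimate what stopping instances outside working hours saves over a week. The 50-hour work week, the instance count, and the hourly rate below are hypothetical; note also that stopped instances may still accrue storage charges, so this only estimates compute savings.

```python
def hibernation_savings(hourly_rate: float, instances: int,
                        active_hours_per_week: float = 50.0) -> float:
    """Weekly compute savings from stopping instances outside working hours.

    Assumes a stopped instance accrues no compute charges (attached storage
    may still bill); 50 active hours/week is a hypothetical dev schedule.
    """
    idle_hours = 7 * 24 - active_hours_per_week
    return round(hourly_rate * instances * idle_hours, 2)

# Ten dev instances at a made-up $0.10/hour, stopped nights and weekends:
# 118 idle hours per week of compute you no longer pay for
print(hibernation_savings(0.10, 10))
```

For a development team, that discipline compounds: the same cluster costs a fraction of its always-on price without changing how anyone works during the day.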

Bottom line: The modern internet is always on, and local infrastructure is always available, but the cloud is a throwback in some ways to the time-sharing systems of old. Get your teams to treat it like a resource.

5. Test locally

Should your teams need an always-available test environment, you can provide it to them. AWS features PrivateLink connections for internal testing, and that's available with OpenShift on AWS. Have your development and R&D teams utilize non-public instances whenever possible.

Using open source across the board means you can maximize the flexibility of the hybrid cloud. You can have developers mirror the cloud environment on their local machine, pushing to production only when you're ready for billing to kick in. You can also run local services on premises, minimizing billable nodes on your public cloud.

The elusive fixed expense

Don't let the cost of the cloud get foggy. Understand your infrastructure, understand what's billable from your cloud service provider, and find pragmatic ways to eliminate variation and waste. You'll develop a set of informed numbers that approximate costs, and as with everything else, you'll be able to learn and adjust those numbers as you gain experience on your new platform. Leverage the advantages of open source, proven technologies, and good old-fashioned common sense, and as always, do your homework.

[ Learn how to simplify hybrid cloud operations with Red Hat and AWS. ]


Seth Kenlon

Seth Kenlon is a Unix geek and free software enthusiast.
