Red Hat blog
More than a decade ago, the launch of public cloud services was the beginning of a seismic shift in the foundation of IT architectures. It was a natural evolution: the first SaaS services started in the late 1990s, and virtual machines (which had been around since the 1970s) began growing in popularity as a way to optimize physical server resources. But virtual machines and even SaaS still rested on the core of a traditional IT infrastructure, built on physical systems in server rooms and data centers.
Public cloud introduced a new, decentralized architecture with built-in services and self-service catalogs. That adoption has changed the nature of IT infrastructure. Red Hat's 2021 Global Tech Outlook report shows that a majority of our customers have a cloud strategy that involves one or more public clouds, while 17% have a private cloud strategy.
But cloud services introduce other challenges for system administrators (and for IT leaders planning projects or trying to manage budgets) because the very thing that makes cloud so easy to adopt makes it very difficult to manage.
The advent of cloud
When the first public cloud was introduced in 2006, we began to see major changes to the way companies did IT. Applications could be developed and deployed on resources completely outside an organization’s physical infrastructure, and at a speed and scale that was unthinkable when purchasing and provisioning systems in a data center.
One aspect of public cloud (and now containers) is that much of the consideration and planning that went into physical infrastructure is simply abstracted away. Many physical infrastructure capabilities, like networking and storage, can be handled natively within the cloud. In addition, the immediate ease of use and simplicity of configuration can make it easy to shift new projects into cloud environments.
But the ease of deployment and easy access to native services is a double-edged sword. Some of the attributes that make it easy to deploy new instances also make it challenging to manage because of the lack of centralized control, like configuring user and service authentication across different clouds or managing data access and security. Even basic administrative tasks like patching become much more difficult because of the distributed, complex architecture of cloud.
There are several reasons why this complexity exists, but some of the common themes include:
Provider-specific tooling, which makes it difficult to manage consistently across cloud providers
Problems migrating workloads to the cloud or between cloud instances
Identifying and managing resource utilization within clouds, including unused resources
Managing base images
Implementing security policies, from controlling user access to patching systems affected by CVEs to configuring instance security settings
One of the old school lessons from data centers is still applicable in a cloud environment: standardization.
Standardize and simplify
Distributed computing environments are inherently complex; reducing complexity where possible will increase operational efficiency, improve security implementation, and optimize your resource utilization (which can help control your cloud spending).
Standard operating environments (SOEs) have two primary goals:
Simplify maintenance processes.
Automate common management tasks.
SOEs usually define the operating system, required configuration, and applications or services in an image used for mass deployment in multiple infrastructures such as virtual machines or the cloud (or even containers).
Using an SOE allows you to take advantage of the built-in tools of cloud providers while also letting you use outside management tools (such as Red Hat Insights), monitor configuration drift, scan for vulnerabilities, and apply fixes or redeploy as necessary. And it can do this consistently and reliably, in a way that allows your IT teams to scale.
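To make "configuration drift" concrete, here is a minimal sketch (not a Red Hat tool, and not how Insights works internally) of how an SOE baseline can be compared against what an instance actually reports. The package names, versions, and the `detect_drift` helper are all invented for illustration:

```python
# Hypothetical illustration: detecting drift between an SOE baseline
# manifest and the packages actually installed on a cloud instance.
# All package names and versions below are invented for the example.

def detect_drift(baseline, actual):
    """Return packages that are missing, unexpected, or at the wrong version."""
    drift = {"missing": [], "unexpected": [], "version_mismatch": []}
    for pkg, version in baseline.items():
        if pkg not in actual:
            drift["missing"].append(pkg)
        elif actual[pkg] != version:
            drift["version_mismatch"].append((pkg, version, actual[pkg]))
    for pkg in actual:
        if pkg not in baseline:
            drift["unexpected"].append(pkg)
    return drift

# SOE baseline: the versions every instance is expected to run.
baseline = {"openssl": "3.0.7", "kernel": "5.14.0", "httpd": "2.4.57"}

# What one instance actually reports (e.g., gathered by an inventory agent).
actual = {"openssl": "3.0.1", "kernel": "5.14.0", "nmap": "7.93"}

report = detect_drift(baseline, actual)
print(report)
```

The value of the SOE here is that the baseline is defined once and applied everywhere: the same comparison works whether the instance runs in one public cloud, another, or on premises.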
Standardization can be a strategy for cloud adoption because the strengths of an SOE help mitigate cloud management challenges.
Take an intentional approach to cloud
We talk a lot about developing a hybrid cloud strategy, and it can be helpful to spell out what that strategy looks like. Basically, it means taking an intentional approach to adding new cloud environments, with clearly defined requirements and outcomes.
One of the great strengths of the cloud is flexibility, but a haphazard approach to configuring instances and deploying workloads can make those cloud environments brittle and difficult to manage, leading to problems with user and data management, configuration, and compliance.
Standardization helps your cloud infrastructure become more flexible, which can allow you to make changes and pivot rapidly with less downtime or service interruptions.
As your infrastructure grows beyond cloud into the edge, artificial intelligence, and other new technologies, having that stable, flexible foundation is critical.
Standardization is an old-school approach, but it can be the right cloud strategy for your organization.
About the author
Deon Ballard is a product marketing manager focusing on customer experience, adoption, and renewals for Red Hat Enterprise Linux. Red Hat Enterprise Linux is the foundation for open hybrid cloud. In previous roles at Red Hat, Ballard has been a technical writer, doc lead, and content strategist for technical documentation, specializing in security technologies such as NSS, LDAP, certificate management, and authentication/authorization, as well as cloud and management. She also wrote and edited the Middleware Blog for Red Hat and led portfolio solution marketing for integration and business automation.