Kubernetes, containers and highly scalable cloud services are the modern elements of business software success. But making the jump to Kubernetes requires training, understanding and a good deal of work from developers, architects and systems administrators alike.
To save time, speed up development cycles and limit organizational agony, it often makes sense to choose a managed Kubernetes offering, rather than running your own. If that’s a decision you’re considering, here are five things you should know.
1. It’s the application platform you don’t have to manage
Kubernetes is no longer the newest, hottest thing. Instead, it’s table stakes. The same way that Linux took over the server market in the early 2000s, Kubernetes has dominated the cloud infrastructure conversation for some time now. Linux containers running at scale alongside in-cloud services, monitoring and lifecycle management is simply too compelling a value to pass up for modern developers and administrators who need scale and reliability.
That popularity, however, comes with some complexity. Because Kubernetes can now “do anything,” it can be tricky to get it up, running and doing something useful. There are a great many options, configurations and hosting choices out there, and each comes with its own set of needs and requirements, just like any software.
While there are many reasons to run your own Kubernetes, and even reasons to build your own application platform, at the end of the day everything at this layer of the stack is purely pavement. Kubernetes helps increase innovation velocity, but that benefit comes from the platform as a whole, with all the key layered components working in harmony with users’ needs. These environments need to be available quickly and with strong security postures in place.
And anyone who’s ever had to set up and maintain test, build and production environments for hordes of developers knows that this is no small task. It can take a whole team of people to keep systems running and in sync with developer needs, and that’s just at the application level.
Traditional IT departments often required weeks if not months to provision such equipment. But the power and speed of the automated cloud can shorten this process to hours, or even minutes.
For some enterprises that don’t have extra IT cycles to spare, Red Hat OpenShift Service on Amazon Web Services (ROSA) provides a higher class of service, support and hosting, along with enhanced security capabilities. With Red Hat’s expert site reliability engineers (SREs) running your infrastructure, your team can get down to business and focus on its applications, instead of spending time standing up and installing systems just so they can start building on top of those clusters.
2. ROSA provides automation and management from Red Hat SREs, so you don’t have to do those tasks yourself
The measure of a good systems administrator is their workload. If they are doing a lot of work by hand, all the time, they are probably missing the point. Even in the ’90s, a great sysadmin was measured by the strength of their scripts. Today, we have a wealth of tools to handle such automation better, rather than simply resorting to the reliable old shell script.
With tools like Ansible, Argo CD and Git, nearly any workflow can be automated into a pipeline that does the work for developers without hiccups or admin involvement. But that level of automation takes time to build and customize, just like any infrastructure.
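As a sketch of what that Git-driven automation can look like, here is a minimal Argo CD Application manifest that keeps a cluster continuously in sync with a Git repository. The repository URL, path and namespace names are hypothetical placeholders, not part of any ROSA default:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app                # hypothetical application name
  namespace: argocd               # namespace where Argo CD is installed
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/sample-app.git  # placeholder repo
    targetRevision: main
    path: deploy                  # directory of Kubernetes manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app
  syncPolicy:
    automated:                    # apply Git changes without manual syncs
      prune: true                 # delete resources removed from Git
      selfHeal: true              # revert manual drift back to the Git state
```

With `automated` sync enabled, a merged pull request becomes the deployment event: the cluster converges on whatever the repository declares, with no admin involvement per release.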
For teams that aren’t yet versed in the new “cloudy” way of building software, and even for teams that would rather spend their cycles away from infrastructure, Red Hat OpenShift Service on AWS includes automation tools and processes that speed up delivery and essentially make the platform “boring.”
Instead of building your development pipelines from scratch, ROSA can tie in your existing systems and tools, such as GitHub and Ansible. Even better, our SREs have constructed patterns for your developers to use, with pre-configured versions of essential infrastructure ready to go at the drop of a hat.
This extends beyond Red Hat OpenShift Service on AWS: the open hybrid cloud model offered by Red Hat OpenShift gives you the flexibility to build across other cloud providers as well as on-premises. For example, you can run backups automatically across multiple locations and cloud providers, around the globe, using the services and systems you already use, at a larger scale.
Throw in serverless, Quarkus for Java and Kubernetes Operators, and you’ve got a globe-spanning, cloud-crossing, fully automated space for your developers to access and use, all with guardrails and strengthened security provisioning built in.
And with less work to do by hand, administrators can focus on the future-looking innovative work they want to do, instead of on the legacy triage and technical debt arbitration they probably don’t enjoy doing.
3. Integrated support from Red Hat and AWS with access to the open source developers who built these tools
Have you ever encountered a bug that was not documented online, had a seemingly infinite number of probable causes and was stopping work entirely? Even one of these traits can leave an entire IT team frustrated and behind schedule.
When critical bugs and errors occur, it’s essential to handle them with something akin to the Way of the Samurai: a focused, disciplined approach based on immediacy, precision and the preservation of the interests of one’s employer. It is also better to have an army than a squad.
And Red Hat fields one of the largest bug-squashing armies out there. It’s very, very hard for bugs to hide from the watchful eye of our service and support team.
This is because when you ask us for help with a bug, we start by evaluating your problem, replicating it and ruling out huge portions of the stack. We can rule things out quickly because we know the code: Red Hat generally contributes around 5% of the code in any given Linux kernel release.
That may not sound like much, but the top contributors fluctuate over time as new subsystems are added to the kernel for embedded systems and mobile devices. Red Hat stays focused on the core of Linux, and as such, we’re one of the largest maintainers and signers of contributed code to the Linux kernel. When code is contributed to the Linux kernel, we review it and test it, even when we didn’t write it. We do some of the grunt work, the day-to-day nitty-gritty of spending cluster cycles to test and secure these open source projects, and we contribute those findings back upstream.
The same goes for Kubernetes, where we’re the second largest corporate code contributor to the project. We help to drive a lot of the future plans for the Kubernetes community, and when those plans stretch out a few years, we try to plug the holes with temporary open source solutions while ushering along the long-term architectural changes usually needed to address systemic issues.
That’s a lot of explanation to say this: ROSA comes with integrated Red Hat and AWS support. If you encounter a real puzzler of a bug, you’ll be working with Linux kernel contributors, Kubernetes contributors and SRE experts who use these systems at scale every day. They know the software, and when you need help, they’ll be there, all the way from the bottom of the stack, right up to where your application boots.
4. It’s more than Kubernetes: it’s a complete application platform
Your developers need all sorts of…things. Databases, registries, repositories, load balancers, firewalls, virtual machines, message queues and pizza. While ROSA doesn’t come with pizza, it does include the rest of these basic building blocks. With the push of a button, you can deploy entire application environments targeted at your developers’ needs.
Are your teams building APIs that tie directly to your company’s bottom line? Red Hat OpenShift API Management offers enterprise-grade API management built on open source, so your developers can hit the ground running and build revenue-earning APIs more quickly.
Are your teams trying to ingest huge amounts of data asynchronously? Red Hat OpenShift Streams for Apache Kafka offers a managed message queue, run by our SREs, but containing your data and providing your developers with a constant pipeline of data for processing.
Or maybe your developers are attempting to build a brain. Training AI/ML algorithms requires large sets of data, large amounts of processing for training, and easily accessible systems for data scientists to manage both. For them, we have Red Hat OpenShift Data Science.
And, of course, ROSA is fully integrated with all of those AWS services you’ve come to rely upon. With 170+ service integrations, there are plenty of things administrators can take advantage of to simplify their day-to-day workload and to provide developers with a faster, easier-to-use Kubernetes experience.
5. It allows you to focus on your apps instead of on Kubernetes
The previous four reasons for adopting ROSA can all be summed up in this final reason. As an administrator, the day-to-day work of setting up systems and services, putting out fires and planning for the future has certainly become more challenging in recent years. As businesses rely more heavily upon their software developers to create innovation and earn revenue, it is the enablers of those developers who have had to run ahead of the freight train, laying the track.
Kubernetes was a major step forward for systems management, enabling administrators to focus their attention on herds of systems, rather than on single servers. It brought with it, however, an exponential growth in management complexity, which the thousands of tools and services layered on top of Kubernetes aim to address.
Just like a modern administrator wouldn’t spend time twiddling a single container, they also shouldn’t spend time fiddling with a single service. That fussy container should be killed and restarted from the repository, and that tricky service should, perhaps, simply be run by experts who provide the cluster itself as a service. Administrators can then focus on enabling their developers instead of on mastering one small aspect of Kubernetes.
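The “kill it and restart it” pattern above doesn’t even need a human in the loop: a liveness probe lets Kubernetes restart a fussy container automatically. A minimal sketch, in which the workload name, image and health endpoint are all hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fussy-service                 # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: fussy-service
  template:
    metadata:
      labels:
        app: fussy-service
    spec:
      containers:
      - name: app
        image: registry.example.com/fussy-service:1.2.3  # placeholder image
        livenessProbe:
          httpGet:
            path: /healthz            # assumed health-check endpoint
            port: 8080
          initialDelaySeconds: 10     # give the app time to start
          periodSeconds: 15           # repeated failures trigger a restart
```

When the probe fails repeatedly, the kubelet kills the container and pulls a fresh copy from the registry, with no pager going off.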
Given the chance, developers can do amazing things with a fully automated cluster. They can spin up a new database without causing terror in the hearts of administrators, who have already set unbreakable guidelines for these deployments through Kubernetes Operators. Developers can create entire environments on-demand to test theories and new software, all safely isolated from the rest of the world. And most importantly, they can innovate without opening a support ticket.
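Those “unbreakable guidelines” can be as simple as a quota on each developer namespace, so an on-demand environment can never starve the rest of the cluster. A sketch of such a ResourceQuota, with an illustrative namespace name and limits that would be tuned per team:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-environment-quota
  namespace: dev-sandbox         # hypothetical per-team namespace
spec:
  hard:
    requests.cpu: "8"            # illustrative caps; tune per team
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    persistentvolumeclaims: "5"  # bound how many databases can be spun up
```

With guardrails like this in place, a developer spinning up a new database is an everyday event rather than a support ticket.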