Today was the best of both worlds when it comes to open source: seeing new things from the community, and seeing those community technologies implemented and working together in enterprises. These talks spanned everything from microservices to analytics, automation, and collaboration.
Christian Posta in his session, “The hardest part about microservices is your data”
Data, data everywhere…
Christian Posta, principal architect at Red Hat, described the impact microservices are having on data. Smaller, separated services acting on data more quickly can have huge consistency implications.
First, Christian got deep with a philosophical question: What is the definition of a single thing? For example, what is a book? Is it the concept of a finished piece of writing? Is it the physical book? If there are multiple copies, are they all the same book? Different books? And if you have a book that comes in multiple volumes, is each volume a book or are they––collectively––a book? The point is these are the same kinds of things you need to consider when you think about your data. Is this data the same data as before? Did it change? When? Did I act on this data then or do I act on it now?
When we start breaking applications down into microservices, these questions become even more important. Data moves faster and faster, over networks that introduce more latency than the in-process calls of a monolith ever did. When it comes to data, we need to plan for failures and treat time delay and the network as first-class citizens. We still need consistency, but we have to expect these failures.
This is starting to sound like the CAP theorem: consistency, availability, and partition tolerance. The popular framing is that you get to pick 2 of the 3. But that's not exactly accurate, because in a distributed system you can't sanely choose to forgo partition tolerance. So, when a partition happens, the real choice is between consistency and availability.
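To make the trade-off concrete, here is a toy sketch (entirely hypothetical, not any real database) of two replicas that handle a network partition differently: one preserves consistency by refusing writes, the other stays available and risks divergence.

```python
# Toy illustration of the CAP trade-off: when a partition cuts a replica
# off from its peer, it must either refuse writes (consistency) or accept
# them locally and risk divergence (availability).

class Replica:
    def __init__(self, prefer_consistency):
        self.data = {}
        self.prefer_consistency = prefer_consistency
        self.partitioned = False  # can we reach the peer replica?

    def write(self, key, value):
        if self.partitioned and self.prefer_consistency:
            # CP choice: stay consistent by becoming unavailable for writes.
            raise RuntimeError("unavailable: cannot replicate during partition")
        # AP choice (or a healthy network): accept the write locally.
        self.data[key] = value
        return "ok"

cp = Replica(prefer_consistency=True)
ap = Replica(prefer_consistency=False)

cp.partitioned = ap.partitioned = True   # simulate a network partition

print(ap.write("user:1", "alice"))       # AP replica stays available
try:
    cp.write("user:1", "alice")
except RuntimeError as err:
    print(err)                           # CP replica refuses the write
```

Neither choice is free: the AP replica now holds data its peer has never seen, which is exactly the kind of "when did this change?" question Christian raised.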
Why not both?
To help handle all of this and give us what we want, we need to build a data system at the application layer. This is what internet companies did in the past to bring us data systems at scale. Frequently these solutions are not modular. And they can be quite bulky. Can we do things in a more open source way?
Debezium is open source. It's a distributed platform designed to capture changes in data. It sits between your data and your apps (or microservices), enabling your apps to react immediately when data changes. Under the hood, it's designed to read database transaction logs, turn them into streams, and publish those streams to a queue. Now you can act on changes in the order that they occurred. Now you know when now is, and when then was.
May the remorse be NOT with you
Who likes to see multiple products work together? Who likes to see it done well? You in the back? Yeah? Well, you’re going to love the impressive capabilities of Red Hat® Insights paired with Red Hat Satellite and Ansible by Red Hat. Led by Red Hatters Paul Needle, principal technical account manager; William Nix, principal technical marketing manager; and Rich Jerrido, principal product manager; this session combined all of these pieces into a cohesive set of demos.
The technology required to keep up with competition and change is becoming increasingly more complex and can make it harder to achieve:
- Optimal performance.
- More consistent security.
Red Hat’s products, working together, can help your business manage complexity through insights and automation. This enables you to better predict problems, prescribe recommendations, prepare mitigation and remediation plans, and report progress on those plans once carried out. Paul even mentioned the newly announced Ansible features that enable automated remediation.
Rich then came up to give a real-world demo of automated deployment of Red Hat Insights via Red Hat Satellite. In Red Hat Satellite 6, Red Hat Insights is there by default; you just have to turn it on. He simply went into his host groups, found the Insights client, clicked a button, and it was configured automatically. The great thing is that you don't have to install each module by hand: anywhere you have Puppet, Insights is designed to reach, at the click of a button.
With Insights running, we looked across our systems and saw system health and recommendations that we could act on. William brought this to life in the next demo: an example of issue remediation. Insights showed us a few issues, each with likelihood and risk assessments. He picked a high-risk issue and drilled down into it. All of the affected systems were listed and, again with the click of a button, he was able to kick off an Ansible playbook to remediate the issue. He first ran the playbook manually through Ansible and patched the affected systems. Then he took a different approach, using Ansible Tower by Red Hat to autodetect the new playbook and run it through the GUI, pointing out that this step can be fully automated. That might give some of you pause, because we're talking about self-healing systems with little to no human intervention. It's a different way of thinking, and those preferring total control might not be as keen. But it's possible. And it can be easier.
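For a feel of what such a remediation playbook looks like, here is a hand-written illustration. The host group, package, and service names are hypothetical; Insights generates its own playbooks, and this fragment is only a sketch of the general shape.

```yaml
# Hypothetical example of the kind of remediation playbook Insights can
# generate and Ansible (or Ansible Tower) can run against affected hosts.
- name: Remediate vulnerable package on affected systems
  hosts: affected_systems        # inventory group of hosts Insights flagged
  become: yes
  tasks:
    - name: Update the vulnerable package to the fixed version
      yum:
        name: openssl
        state: latest

    - name: Restart services that link the patched library
      service:
        name: httpd
        state: restarted
```

Because Ansible tasks like these are idempotent, the same playbook can be re-run safely across the fleet until every flagged host reports healthy.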
The key takeaways:
- Insights can help you create playbooks to handle issues as they appear in the wild.
- Then you can more easily and quickly remediate affected pieces of your infrastructure––with little intervention.
- This helps you address vulnerability issues more quickly––which can help to reduce your business’ attack surface.
Keep running. Be happy.
Red Hat’s Justin Holmes and Øystein Bedin talking about Open Innovation Labs
Red Hat Open Innovation Labs: An answer to your DevOps problems
One of the last sessions of the Summit came in the form of metaphors. Justin Holmes, DevOps and Platform-as-a-Service (PaaS) architect at Red Hat Open Innovation Labs, and Øystein Bedin, software architect for Red Hat Consulting, started with: What do an award-winning restaurant and Open Innovation Labs have in common? When Red Hat launched Open Innovation Labs a little over a year ago, they knew they had to rethink how things work to help businesses become more nimble and successful.
Back to the restaurant metaphor. All good restaurants have great kitchens: open, collaborative workspaces equipped with the best tools, built for speed and experimentation. Red Hat set out to make these spaces. The chefs of the Open Innovation Labs kitchens needed a variety of leading skill sets and a shared culture and passion, with everyone accountable for their part. The method should be built around speed and control, with the ability to try, learn, and modify; key to this is mentoring to help accelerate training. Finally, the meals rely on inventing new dishes and gathering feedback to create memorable experiences. All of this comes from having the right space, with the right people, following the agreed-upon method, with real enthusiasm.
And that’s exactly what Open Innovation Labs is designed to do, working to open locations soon in Boston, London, and Singapore with the best minds Red Hat has to offer from across the organization.
Now for another metaphor: transportation. Open Innovation Labs works with customers in 3 major ways:
These are organizations looking to disrupt. They know where they want to go, and they want to get there as fast as possible. Open Innovation Labs helps them create demos they can bring to their executives as proof of what the organization can do, helping change the mindset and move more quickly.
The oil tanker
These organizations can be large. They know where they want to go, but change course slowly. The transformation required to adopt new ways of thinking is like steering an oil tanker. A little bit at a time will get you there. Red Hat can help them do this 1 team at a time, 1 degree at a time.
The road trip
These are the organizations looking to explore. They don’t know where they want to go, but they want to experience state-of-the-art app dev and DevOps. They want to see what’s possible. Open Innovation Labs is equipped to dream with them, enjoy the ride, and find new and unexplored paths.
Working with Open Innovation Labs gives your team time away from the day-to-day whirlwind to figure out the future of your organization. You're surrounded by your peers and dedicated, passionate Red Hatters, determined to find out where you want to go and seeking to get you there the best way possible. Open Innovation Labs focuses on:
- Getting your apps running.
- Container-based deployments through Red Hat OpenShift, our container platform.
- Private and public cloud deployments through OpenStack®, Amazon Web Services, Google Cloud Platform, Microsoft Azure, VMware, and bare metal.
- Using Ansible by Red Hat and Ansible Tower by Red Hat for idempotent configurations.
- Building through a push-button infrastructure, so you can iterate faster, learn, and not be afraid of failure.
Learn more about Red Hat Open Innovation Labs.
Th-th-th-that's all folks!
Red Hat Summit 2017 has been packed with sessions. And those sessions have been packed. Containers, security, cloud, microservices, demos, automation, risk management, infrastructure, and middleware have all been featured over these 3 days. Not to mention training and labs. And, of course, all of the after-hours fun that accompanies Red Hat events.
Be sure to catch up on the other blog posts to learn more about other tracks, like containers, hybrid cloud, infrastructure, management, and security. We’ll see you next year.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks of OpenStack Foundation in the United States and other countries, and are used with the OpenStack Foundation’s permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the OpenStack community.
About the author
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies.