Video

The Road to Open Hybrid Cloud: Part 1 - Bare Metal to Private Cloud

About this video

How can you get the benefits of open hybrid cloud fast? Watch these step-by-step demos from Red Hat Summit that show how to go from bare metal servers to a flexible, scalable infrastructure that supports your business applications.

Part 1: Live on stage, we take a multi-vendor rack of servers, storage, and networking hardware and turn it into a private cloud using OpenStack Director, making it ready for developers to build next-generation applications with OpenShift.

To learn more, visit: https://www.redhat.com/en/challenges/cloud-infrastructure

Video channel
Events
Duration
3:55
Date

Transcript

[Burr] We've learned a lot from public cloud providers, specifically their efficiencies and capabilities, and we've now brought those same capabilities to your private data center. What you'll see in this Red Hat Summit demonstration is us taking an on-stage rack with hardware from multiple vendors racked into it and lighting it up with OpenStack Director, making OpenShift ready for developers to build their next-generation application workloads.

[Angus] This is an impressive rack of hardware that Jay has brought up on the stage. What I want to talk about today is putting it to work. Director deploying OpenShift in this way is the best of both worlds: it's bare metal performance, but with an underlying infrastructure as a service that can take care of deploying new instances, scaling out, and a lot of the things that we expect from a cloud provider. Director is running on a virtual machine on Red Hat Virtualization at the top of the rack. It's gonna bring everything else under control. What you can see on the screen right now is the director UI, and as you see, some of the hardware in the rack is already under management. At the top level we have information about the number of cores, the amount of RAM, and the disks that each machine has. If we dig in a bit, there's information about MAC addresses and IPs, the management interface, the BIOS, the kernel version. Dig a little deeper and there is information about the hard disks. All of this is important because we want to be able to make sure that we're putting workloads exactly where we want them. Jay, could you please power on the two new machines at the top of the rack?
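The facts the director UI is surfacing come from the undercloud's bare metal (Ironic) inventory. Purely as an illustrative aside, not part of the demo, the same per-node information could be pulled with the openstacksdk Python library; the cloud name "undercloud" and the property keys below are assumptions about a typical director install:

    # Sketch only: list the nodes director manages and print the inventory
    # facts the UI shows (cores, RAM, disk, power and provision state).
    # Assumes openstacksdk is installed and clouds.yaml defines an
    # "undercloud" cloud; both are illustrative assumptions.
    import openstack

    conn = openstack.connect(cloud="undercloud")

    for node in conn.baremetal.nodes(details=True):
        props = node.properties or {}
        print(f"{node.name}: {props.get('cpus')} cores, "
              f"{props.get('memory_mb')} MB RAM, "
              f"{props.get('local_gb')} GB disk, "
              f"power={node.power_state}, provision={node.provision_state}")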

[Jay] Sure.

[Angus] So when those two machines come up on the network, director is gonna see them, see that they're new and not already under management, and immediately go into the hardware introspection that populates this database and gets them ready for you. Profiles are the way that we match the hardware in a machine to the kind of workload it's suited to. But director scales up to data centers, so we have a rules-matching engine which will automatically take the hardware profile of the new machine and make sure it gets tagged in exactly the right way. We have a set of validations. A lot of our validations actually run before the deployment. They look at what you're intending to deploy, they check that the environment is the way it should be, and they'll preempt problems, and obviously preemption is a lot better than debugging.
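To make the profile tagging concrete, here is a minimal, hypothetical sketch of setting a profile capability on a node with openstacksdk; the node name "node-1" and the "compute" profile are illustrative, not taken from the demo:

    # Sketch only: tag a bare metal node with a deployment profile so the
    # rules matching places the right role on it. Node name and profile
    # are hypothetical.
    import openstack

    conn = openstack.connect(cloud="undercloud")
    node = conn.baremetal.find_node("node-1", ignore_missing=False)

    # Director matches roles to nodes through the "profile" capability.
    props = dict(node.properties or {})
    props["capabilities"] = "profile:compute,boot_option:local"
    conn.baremetal.update_node(node, properties=props)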

[Burr] I love how the introspection and the validations basically ensure that we're putting the right software on the right hardware. Now we're ready to hit that provisioning button and actually install OpenShift.

[Angus] If I switch over to the deployment plan view, there are a few steps. The first thing you need to do is make sure we have the hardware. The next thing is deployment configuration. This is where you get to customize exactly what's gonna be deployed to make sure that it really matches your environment. As you can see on the screen, we have a set of options around enabling TLS for encrypting network traffic. If I dig a little deeper, there are options around enabling IPv6 and network isolation, so that different classes of traffic go over different physical NICs. Director comes with a set of roles for a lot of the software that Red Hat supports, and you can just use those, or you can modify them a little bit if you need to add a monitoring agent or whatever it might be, or you can create your own custom roles. So the roles we have right now are gonna give us a working instance of OpenShift. If I go ahead and click through, the validations are all looking green, so right now I can click the button to start the deploy, and you will see things lighting up on the rack. Director's gonna use IPMI to reboot the machines, provision them with a RHEL image, put the containers on them, and start up the application stack. I need to hand this over to our developer team so they can show what they can do with it. Thank you.
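Behind that deploy button, director drives the IPMI power actions and image provisioning through the bare metal service. A small sketch, assuming the same hypothetical openstacksdk setup as above, of watching the nodes move to the "active" provision state while a deploy runs:

    # Sketch only: poll the nodes during an overcloud deploy and stop once
    # every node has been provisioned ("active"). The cloud name and the
    # 30-second polling interval are assumptions.
    import time
    import openstack

    conn = openstack.connect(cloud="undercloud")

    while True:
        nodes = list(conn.baremetal.nodes(details=True))
        for node in nodes:
            print(f"{node.name}: power={node.power_state}, "
                  f"provision={node.provision_state}")
        if nodes and all(n.provision_state == "active" for n in nodes):
            break
        time.sleep(30)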

[Burr] So that was super cool. You saw us take bare metal hardware and light it up with OpenStack Director, making OpenShift ready for application workloads. What we're about to do next is take the on-stage private cloud that we created earlier, take an application workload, and scale it out across the hybrid cloud.