About a decade ago, my wife and I moved from Philadelphia to San Francisco. We had both finished graduate school in the Northeast and decided it was time to strike out west. In retrospect, showing up in the Bay Area in 2001 may not have been well-timed…but what did we know! The folly of youth. In any case, it was still hard to find an apartment, though I am guessing a lot easier than right now. We found Noe Valley to be a great neighborhood and moved in fast. The move was especially quick given that all we seemed to have brought from our university one-bedroom apartment was a scratched-up four-person round table, an old TV, and a couple of mattresses.
So, we figured we needed a couch at the very least. We went to a furniture store and found one that we really liked. Then the salesperson said we had a choice. We could buy the really nice, handcrafted leather sofa from Italy for about $5,000. It would be customized to our specs, with our pick of the color and grade of leather, and it would show up in a couple of months. Or we could buy a Chinese replica of the design. Instead of nice leather, the only fabric choice for the replica was suede, and it was available only in a pale yellow. But it was a fraction of the price, and it could be in our apartment within a week. That was a really easy choice.
Faced with the same dilemma today, I would likely be able to wait it out for the exact sofa we wanted. It wouldn't be a burning need (we don't spend our time eating in front of a TV!). We would have the financial flexibility to absorb the cost, and the need for durability and quality would trump convenience and price.
That tradeoff doesn't apply to every person. Nor does it apply to every organization facing the choice between handcrafted, custom application development and a more automated, speedier development cycle. And that choice applies to every layer of the software stack, from applications to middleware and operating systems all the way down to hardware and storage. Datacenters are complex and expensive to run. Upgrading and managing custom infrastructure is hard and costly. The rise of cloud computing is a direct outcome of organizations' desire to reduce reliance on custom design at every layer and gain the speed that confers competitive advantage.
Workload portability is an essential ingredient of platform automation. The foundation is an operating system that runs comfortably in the datacenter, delivering mission-critical workloads while scaling to support cloud demands. Building the middleware on open technology allows for flexibility in application development, which is critical in a world of developer choice and polyglot affinity. All this scaffolding requires continuous build, integration, and source code management for the platform to be attractive to developers. Reducing the pain of platform configuration and enhancing self-service for developers lowers barriers to adoption while facilitating rapid application development and deployment.
OpenShift was introduced in 2011 to address all the pain points described above. The starting point of the product (or service) design was the application developer who really just wanted to use familiar (or modern) tools and frameworks to focus singularly on building the greatest mobile game, sales productivity app, or messaging service they could dream up. All the infrastructure around build and code management, configuration, testing, staging, deployment, and scaling was the job of the platform. The platform was architected for thousands of applications running in a multi-tenant environment, supporting multiple languages and frameworks, with an extensible framework that enabled integration of external systems like data stores and monitoring tools. And the platform itself was designed as a service, accessed via a web console for developer self-service and available on demand. Stop worrying about the nuances of operating system and middleware configuration. Focus on the application. And leave the rest to PaaS.
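To make that self-service workflow concrete, here is a rough sketch of what creating and deploying an application looked like with the OpenShift client tools of that era (the `rhc` CLI). The application and cartridge names below are illustrative, the commands require an OpenShift account, and exact subcommands and cartridge versions varied by release:

```shell
# Create a new application from a language cartridge
# (app name and cartridge version are illustrative).
rhc app create mygame python-2.7

# Add a database cartridge to the same application.
rhc cartridge add mysql-5.1 --app mygame

# Deploy: pushing to the application's Git repository triggers
# the platform's build and deploy pipeline automatically.
cd mygame
git commit -am "Add leaderboard endpoint"
git push
```

Everything below the `git push` (operating system setup, middleware configuration, scaling) was the platform's job, which is exactly the division of labor described above.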
Over the last two years, the PaaS market has grown and adoption has skyrocketed. As thousands of applications have been built, configured, and deployed on the OpenShift public PaaS, we have uncovered many new requirements. For reasons of risk mitigation, governance, security policies, and regulations, a great number of workloads cannot immediately move to public clouds. In some cases, there is a skills shortage, or trust in public cloud environments has yet to be established. Whatever the reasons, a sizable segment of the market is interested in private cloud solutions. Many of those with the greatest need for such a solution are large enterprises that rely on Red Hat for mission-critical workloads and fully understand the benefits of application platform automation and standardization for their complex environments. Those are the target customers for OpenShift Enterprise - a private PaaS offering that brings the benefits of a public PaaS to organizations that can manage the overall infrastructure themselves.
Key to a viable platform for the future is openness: open source, open APIs, open standards, open governance. An open platform leads to a community, and it is clear that the future of long-lived technology platforms rests on a vibrant community of users, contributors, and evangelists. Linux, Hadoop, and MongoDB are all examples of communities aligned to technology platforms, enabling a wide variety of users to feel comfortable adopting the technology, backed by an ecosystem of providers, implementers, and extenders. OpenShift Origin is a community in its early stages that serves as the focal point of user energy around PaaS, one we are committed to encouraging others to participate in and drive forward. Innovation happens best through collaboration among a diverse set of voices, and we are already excited by the contributions the community has created.
Where we go from here is clearly an exercise in crystal ball gazing. There are certainly providers that would make the case for public cloud all the time. And there is significant adoption of, and energy around, the private cloud community. As we have found in our discussions with some of the largest and most sophisticated IT users in the world, the decision isn't binary - the need for "AND" is significant. For organizations that embrace "and," I predict that the open hybrid cloud will be their IT future.
Current trends indicate that OpenStack-based private clouds will see increasing adoption as enterprises living in a hybrid cloud reality embrace the value of IaaS from an open, innovative community platform. But addressing the scaling challenges in compute and network without a similar focus on applications and data will solve only part of the problem and deliver only a portion of the automation efficiencies. Features in IaaS and PaaS will increasingly need to be linked, or at the very least coordinated, to realize the full value of a new cloud infrastructure. Provisioning, managing, and monitoring existing datacenter workloads, plus new applications on a private cloud and so-called "shadow IT" in the public cloud, can consume a lot of unnecessary time and energy. Enterprises should collaborate with providers that work seamlessly across all of these worlds while keeping the seams transparent to the user's experience.
I would also expect DevOps to evolve as cloud platform adoption becomes ever more prevalent in IT environments. Tools like Puppet and Chef are necessary today because the IaaS and PaaS layers of the cloud platform are still nascent and gradually developing. Application provisioning, patching, and lifecycle management will become an integrated feature set that can be partially controlled by service developers and fully configured by cloud platform administrators. As those duties and skills separate further, application and service developers can reasonably use a collection of tools and libraries to build immersive user experiences with specific capabilities, without having to learn the intricacies of system patching and compliance. Cloud platform administrators, on the other hand, will focus on managing and scaling the gamut of operating system, middleware, messaging, and testing environments for large, distributed instances customized for individual use by a huge diversity of users. Workloads will need to be monitored and brokered across traditional and shared datacenters in public, private, and hybrid clouds. Efficiencies will arise from smart cost brokering, differential service levels, and utilization ratios.
It's an understatement to say that it's an exciting time to be in the IT industry. As these technologies sweep over us, we can expect to find tremendous value in the next few years, with the agility and scale of automated application technologies underpinning PaaS and powering an open hybrid cloud near you.
About the author
Ashesh Badani is senior vice president of Cloud Platforms, responsible for leading Red Hat’s broad hybrid cloud portfolio, including product development and go-to-market strategy for Red Hat OpenShift, Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat Cloud Suite, and Red Hat Cloud Infrastructure. In this role, Badani has helped to solidify Red Hat as a hybrid cloud and enterprise Kubernetes leader.