Red Hat Blog
If you look at Red Hat’s product portfolio, you’ll find one thing in common across the board: everything we ship today was once an immature technology you’d have been unwise to put into production. In some cases, it was technology that didn’t even appear to have production applications. The technologies we depend on today were once emerging technologies with no guarantee of making it into production.
What are emerging technologies?
At Red Hat, we define emerging technologies as technologies that are new at an industry level and are represented by fresh, relevant open source projects. When we use the word “emerging” to describe a new industry trend, piece of software or application, it typically falls into one of three areas:
- Emerging technology. These are newer technologies that only some vendors have commercialized. They may still be getting their feet underneath them, with more grassroots activity than commercial activity, and the open source project (or collection of projects) is the core focal point for the technology.
- Emerging market. At an industry level, these are not necessarily specific to Red Hat. These are market spaces that are taking existing technologies and building solutions into a new space. An example might be the digital transformation that the communications service provider industry is undergoing, where technologies like network functions virtualization (NFV) and the Internet of Things (IoT) are creating new market capabilities largely from existing technologies. In some cases, additive technology may be required to build or optimize a full solution.
- Emerging product. These are newer products in our portfolio that have a smaller customer base or are just getting started in their lifecycle. We have core products like Red Hat Enterprise Linux (RHEL) and Red Hat Middleware, and a newer set of products in the cloud space like Red Hat OpenStack, Red Hat OpenShift and storage. These are emerging for us as a company, not necessarily emerging technologies; there is no 1:1 correspondence requiring an emerging product to be built from an emerging technology.
Examining the pace of innovation
The innovation and evolution of existing technology is not going to slow down; it’s going to accelerate. The pace of change will continue to advance because the rate of activity in open source projects continues to grow. For example, newer projects like Kubernetes and Ansible have significant velocity in terms of contributors, commits and pull requests, while the velocity of development on the Linux kernel has continued unabated.
You’ll find a direct relationship between the amount of activity in open source projects and the amount of change we see in the technology landscape. The “digital transformation” focus of many of our customers, who are becoming adept at technology and delivering business-critical software at an increasing pace, is both driven by open source and drives it in turn.
We are contributing to this evolution the way we always have: by rolling up our sleeves and getting directly involved as engineers in the open source community projects that are critical to our customer base and our business, and that are driving all of this change. The Office of the CTO spends its time looking at longer-term industry trends and related open source projects to identify emerging areas that aren’t currently on our product teams’ roadmaps. We use those product roadmaps to help prioritize our scope of work over the next six, 12 or 18 months. As we look ahead over the 18-month, three-year or five-year horizon, we need to make sure we’re helping our customers successfully navigate these industry changes. The work we’re doing now may well impact existing products, as with the Knative and Istio projects we’ll talk about a little later, or it could help us identify new product areas to invest in.
A glimpse into 2019
Red Hat has seen the open hybrid cloud as the future for years, and we’re now seeing hybrid cloud functionality become an increasingly popular aspect of technology and solutions. In 2018, large public cloud providers brought some of their technologies on-premises, which shows us that our customers’ realities are hybrid. What that means for us is that we’ll stay the course and bring a broader industry focus to the hybrid cloud space.
Kubernetes leads the way
We believe that Kubernetes is at the core of the open hybrid cloud. It has emerged as the de facto standard for application-focused clustering technologies. Cluster management, scheduling and orchestration are all in Kubernetes’ wheelhouse, and it has cemented its role in the industry in the past year. Cloud-native applications require resource elasticity, and may need to respond to global-scale load, which calls for a tool like Kubernetes to scale them.
Just as Linux emerged as the focal point for open source development in the 2000s, we see Kubernetes emerging as a focal point for building technologies and solutions (with Linux underpinning Kubernetes, of course). A great example of this is Knative, a Kubernetes-based platform that offers a Kubernetes-native API for implementing serverless-style functions and for easing the deployment of applications and containers.
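To make that idea concrete, here is a minimal sketch of a Knative Service manifest. The service name and container image are hypothetical, and the exact API version depends on the Knative release in use:

```yaml
# A minimal Knative Service. Knative creates and autoscales the
# underlying deployment for you, including scaling to zero when
# the service receives no traffic.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                   # hypothetical name
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/hello:latest  # hypothetical image
        env:
        - name: TARGET
          value: "world"
```

Applying a manifest like this yields a routable, autoscaled service without hand-writing the Deployment, Service and autoscaler objects it would otherwise take.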
While serverless isn’t new to the industry, it’s really exciting to see a fully open source project that melds well with the Kubernetes ecosystem and has a real chance of maturing and becoming a standard in its own right.
Alongside serverless, we see the service mesh concept taking off. A service mesh is essentially platform-level automation for creating the network connectivity required by microservices-based software architectures. Istio is one service mesh implementation that we’ve been working with. Now in Technology Preview for OpenShift, Istio is also targeted for Kubernetes and has gained a lot of mindshare.
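As a sketch of what that platform-level automation looks like in practice, an Istio VirtualService can split traffic between two versions of a service declaratively, rather than baking routing logic into application code. The service name, subsets and weights below are illustrative:

```yaml
# Illustrative Istio traffic split: 90% of requests to subset v1,
# 10% to subset v2 (subsets are defined in a DestinationRule).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews          # hypothetical service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Because the mesh's sidecar proxies enforce the split, a canary rollout like this needs no changes to the microservices themselves.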
Another interesting emerging trend in the Kubernetes space is increasing interest from organizations that want to run Kubernetes on bare metal servers. When you combine this with KubeVirt, a project that gives Kubernetes the ability to manage virtual machines, you have Kubernetes managing containers and virtual machines side by side. This is a powerful combination: teams can focus on a single cluster management and scheduling tool, Kubernetes, to support the on-premises portion of their hybrid cloud workloads.
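For a flavor of what "VMs as Kubernetes objects" means, a KubeVirt VirtualMachine is declared with a manifest much like a pod's. This is a sketch only: the demo disk image and resource sizes are illustrative, and the API group has evolved across KubeVirt releases:

```yaml
# A small VM, declared and scheduled by Kubernetes via KubeVirt,
# booting from a disk image shipped inside a container.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm                 # hypothetical name
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
      - name: rootdisk
        containerDisk:
          image: kubevirt/cirros-container-disk-demo
```

The VM then shows up alongside pods in the same cluster, managed by the same scheduler and tooling.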
Machine learning and AI
You can’t really discuss emerging technology without talking about artificial intelligence (AI) and machine learning (ML). These are not new by any stretch, but they have been difficult for companies to implement and are largely the domain of pure data scientists.
Of late, we’re seeing projects that aim to make AI and machine learning more accessible to software developers, and to bring models under lifecycle management the way development and operations teams already manage applications. One project we have our eyes on is Kubeflow, a machine learning toolkit for Kubernetes. The idea behind Kubeflow is to make it simple to scale machine learning models and deploy them to production wherever Kubernetes is running. Once again, we see Kubernetes as the target platform of choice.
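As one example of that approach, Kubeflow's training operator lets a team declare a distributed TensorFlow training job as an ordinary Kubernetes resource. This is a hedged sketch: the image is hypothetical and the TFJob API version varies across Kubeflow releases:

```yaml
# Illustrative Kubeflow TFJob: two worker replicas running a
# containerized training script, scheduled by Kubernetes.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train            # hypothetical job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
          - name: tensorflow
            image: registry.example.com/mnist-train:latest  # hypothetical
```

Scaling training out then becomes a matter of changing `replicas`, with Kubernetes handling placement and restarts.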
Hardware acceleration is a great way to improve the performance of AI and ML workloads. Making hardware like graphics processing units (GPUs) or field programmable gate arrays (FPGAs) easy for applications to access is another key to the success of ML workloads. Kubernetes has been focused on making it easy for developers to access these kinds of hardware accelerators from within containerized applications, so you can expect to see an increasing number of hardware accelerated machine learning workloads targeting Kubernetes.
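Concretely, once a device plugin (such as NVIDIA's GPU plugin) is installed on the cluster, a containerized workload requests an accelerator through Kubernetes' ordinary resource model. The pod and image names here are hypothetical:

```yaml
# A pod requesting one GPU; the scheduler places it on a node
# that advertises the nvidia.com/gpu resource via a device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training           # hypothetical name
spec:
  containers:
  - name: trainer
    image: registry.example.com/trainer:latest  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1
```

From the application's point of view, the accelerator simply appears inside the container; no node-specific configuration leaks into the workload definition.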
Outside of all the action around Kubernetes, we have also been paying attention to the communications service provider market and the push from 4G to 5G. The transition to 5G is a major industry effort that is partially gated by standards-body work to create formal specifications of what 5G means. That process is nearing completion, and early-stage trials of 5G networks are starting to roll out. That’s more of an emerging market.
We’re also applying some existing technologies to that market space, and there are a lot of interesting open questions about 5G and its implications for next-generation computing architecture. 5G is a network-focused infrastructure change, but it enables things like edge computing.
5G and edge computing are separate concepts, but together they can create something interesting, and the industry is right at the beginning of that trend. Alongside the technology questions come questions like: what’s the real business value of 5G? Realizing it requires investing in new phones, radios and all the physical infrastructure that 5G needs, but also building new applications and services that can take advantage of the improvements a 5G network brings: reduced latency, increased bandwidth and increased connection density.
We have already been working on the software that may accompany this physical infrastructure and building open source platforms that we expect to support this next generation network. These platforms can become solutions that enable edge computing, which can create the ability to push a set of computing workloads away from the device and centralized networks to the edge of the network. This could create a new tier in the architecture, which today is essentially two-tier — devices to the data center or devices to cloud. This third tier, the edge, enables data processing to happen close to data sources (devices, or “things” in an IoT architecture) and is expected to support a new class of latency sensitive applications. The edge is typically defined by a large number of smaller clusters, so a key area of exploration in 2019 will be tools to provision, deploy, and lifecycle manage a distributed collection of edge clusters.
It’s not emerging in the sense that you’d never heard of it before 2018; what happened in 2018 is that you started to see the first real early trials attempting to build this infrastructure and make sense of what is possible at the edge.
A year of chopping wood and carrying water
One of the perks of following emerging technology is watching it become more widely used and transition to the mainstream. It’s very satisfying to see things we were part of as nascent projects begin to mature and be chosen for real-world workloads.
In 2019, we expect to see a great deal of that with Kubernetes and its ecosystem of projects like Istio, Kubeflow and Knative. The next set of emerging technologies seems likely to spring from that set of tools, and we’ll be watching, ready to roll up our sleeves to help bring those to maturity as well.
Chris Wright is chief technology officer at Red Hat.
About the author
Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.