Yesterday at Red Hat’s Open5G event, I was able to sit down with Fran Heeran, senior vice president and general manager of Nokia's Core Networks business, to talk more about our strategic partnership to deliver Nokia's core network applications together with Red Hat's industry-leading cloud infrastructure platforms. With this alliance, we can pair the full power of open source platforms with Nokia’s core network applications, providing customers with even greater choice and flexibility.
It all starts with the horizontal cloud—a platform capability that spans a service provider’s own network, from the core datacenter to the edge. In my conversation, Fran took us back to the origins of telco infrastructure for context—from bare metal, to virtualization, to cloud, to containers, to where we are now with a horizontal cloud. Nokia customers, and Red Hat’s too, have started to take a horizontal approach to their plans, meaning they’re making more open, less siloed decisions across their infrastructure, strategy and portfolio. Essentially, they want more open cloud architectures. Nokia is focused on building the best network applications in the world, so the partnership with Red Hat was the next logical move.
Part of this collaboration was recognizing that service providers are still tackling major technology transitions as part of the collective shift toward IT and network transformation. Many service providers have perfectly functional applications built as virtual network functions (VNFs) on virtual machines (VMs), where there is little need for expansion or growth, just stabilization. In these cases, we’re working together to help customers stabilize and wait for a natural transition point, such as the introduction of new hardware or of new functionality with a new application, and to use that as the moment to move to cloud-native network functions (CNFs). Together, Nokia and Red Hat can better meet customers where they are, anticipating their needs and making sure they have the technology that makes the most sense for them, when and where they need it.

But we’re also thinking big and considering the evolutionary picture. What are these natural industry progressions? The journey to cloud-native, certainly. The beauty of containers lies in the packaging format used to deliver applications, and in the portability that lets them run on a wide variety of infrastructures, whether on bare metal, out at the edge or in a hybrid cloud environment.
Where does the edge begin and end? How do we build for what struggles to be defined?
Fran mentioned—and I think this is crucial to point out—that in the service provider world, no two people’s versions of edge seem to be the same. Everyone has a different view of edge, and we’re getting to a place where the network is much more distributed. He mentioned (and I agree) that this will continue into advanced 5G and on to 6G as well. With this comes the discussion of how we build applications: common architectures or purpose-built. Fran asked, “I think that's going to be a big factor for application developers. Do I have to write two different versions of my application, or know in advance where it needs to be? Or do we see the cloud infrastructure playing a role in providing that kind of abstraction between the underlying differences in location and hardware?”
I think this is an interesting question. Let’s look at it from a developer’s point of view, and maybe even more broadly: in an ideal world, the preferred outcome would be an application that can be developed once and deployed anywhere, with a runtime choice of deploying it wherever it needs to go to serve the end customer within the right latency and performance boundaries. But it can be use case dependent.
With orchestration, you can choose to place an application in one location but also make it accessible elsewhere, so that it can be spun up in any relevant distributed site. That doesn’t mean, however, that you can deploy the application just anywhere. Even in a standard, large, highly centralized public cloud, there are applications that need specific hardware features, such as accelerators for different types of workloads.
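To make this concrete, here is a minimal sketch of how such a placement constraint might look in a Kubernetes-based platform like Red Hat OpenShift, where the workload manifest itself declares the hardware it needs. The pod name, image, and edge node label below are hypothetical examples for illustration; `nvidia.com/gpu` is the extended resource name advertised by the NVIDIA device plugin.

```yaml
# Hypothetical CNF pod that the scheduler will only place on nodes
# that carry the right label and expose an accelerator resource.
apiVersion: v1
kind: Pod
metadata:
  name: packet-processing-cnf            # illustrative workload name
spec:
  nodeSelector:
    node-role.kubernetes.io/edge: ""     # example label marking edge sites
  containers:
  - name: dataplane
    image: registry.example.com/cnf/dataplane:1.0   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1                # accelerator advertised by a device plugin
```

A workload without such constraints remains fungible and can land on any node in any site; adding the selector and resource limit is what pins it to locations with the specialized hardware.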
So, as much as we’d like it to be perfectly homogeneous, the reality is that we'll have certain areas with very specialized needs. Some applications will have to be deployed to specific locations; others will be a little more fungible. This is the argument for greater flexibility and choice in exactly where you deploy.
This is another place where our partnership comes in handy. Nokia is prioritizing the development of applications that fit the hardware underneath, giving customers the ability to run them in the different locations where they’re needed.
If you’d like to learn more about our partnership with Nokia, you can check out our newsroom here.
About the author
Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.