As the world digitizes and software eats the world, there is a growing expectation of on-demand, customizable services for both enterprises and end users. As end users, we expect sub-millisecond latency and jitter when gaming online. We want to hear the goal at the same time as our neighbor when watching a game; and above all, we want the video and audio to be stable when on a video call.

All these things, and many more, require a lot of automation and orchestration. Software Defined Networking (SDN) is a piece of the puzzle. In this first post, we’ll provide an overview of the role it plays and how it has transformed the way communication service providers operate their networks.

A brief history of SDN

The more objects we connect, the simpler the underlying connectivity should become: a commodity that no one has to pay attention to. The Internet of Things (IoT) is a catalyst, offering many possibilities and setting the bar for a high degree of network automation (otherwise, operations will not scale).

Furthermore, as data has become a currency, and as data analytics, machine learning, and artificial intelligence (AI/ML) are on the rise, the network needs to be efficient from both a capacity and a reliability perspective.

SDN is about separating the control plane from the forwarding plane, making network control programmable and centralized, and abstracting the underlying network elements away from the applications and services that use them.

In its early days, SDN was associated with OpenFlow, a protocol that can be used to realize L2/L3 switching, firewalls, and much more through a generic, table-based forwarding pipeline. Network controllers can administer and configure a switch’s forwarding plane, as long as the vendor ecosystem implements and exposes the OpenFlow APIs. That way, depending on the need or use case, the forwarding plane can be (re)configured dynamically through remote, centralized administration.
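To make that concrete, here is a minimal sketch of a controller programming a switch’s pipeline, written with the Ryu OpenFlow framework (the application and the table-miss rule it installs are illustrative, not something from a production deployment):

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissApp(app_manager.RyuApp):
    """Install a table-miss entry on every switch that connects."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Match everything; punt unmatched packets to the controller
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # Priority 0 is the lowest, so real flow entries always win
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))
```

From there, the controller can push higher-priority flow entries to implement switching, firewalling, or traffic steering, without ever touching the device’s CLI.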

Expected outcomes of SDN

The promise of SDN is to reduce the administrative overhead of managing networks, making them more agile to adapt and adjust based on demand or need, through a centralized controller.

Some other expectations of SDN could include:

  • Provide visibility into the network state and enable service assurance (closed/open loop).

  • Adjust the network on demand or dynamically to deliver services or meet defined SLAs.

  • Configure the network to enable or disable traffic patterns (i.e., traffic steering).

  • Configure the network to fulfill the needs of new workloads, and automagically enable cross-workload communication.

  • Remove a service’s specific network configuration when the service is decommissioned, and adjust impacted network elements accordingly.

Telco expectations, and reality

SDN has indeed been very appealing to communication service providers.

They saw the opportunity to:

  • Move from a complete single-vendor stack (lock-in) to a multi-vendor environment, increasing competition within the vendor space and potentially reducing capital investment.

  • Make the network programmable and reactive to events and failures, enabling closed/open-loop automation (through more or less advanced AI/ML) and reducing operational expenditure.

But of course, it doesn’t happen overnight, because:

  • Network engineers need to be upskilled to program the network and/or to build platforms enabling overall orchestration and control (basically, to become software architects and developers).

  • Now that telcos want an interoperable multi-vendor environment, they no longer adopt the vendor-provided software suites that manage the end-to-end services a single vendor’s solution delivers. Managing and supporting the integration of the various vendors’ network elements, and making them work in harmony, becomes their responsibility.

  • Network engineers have to arm wrestle with their network vendor partners to get them to expose programmable APIs (and to keep them backward-compatible as network element versions evolve).

SDN protocols

A little after the invention of OpenFlow, SDN broadened with the adoption of other network programmability protocols that enable remote configuration of network elements, providing additional pieces of the puzzle:

  • NETCONF (XML over SSH) - RFC 4741 - 2006 (later revised by RFC 6241).

  • RESTCONF (XML/JSON over HTTP, REST-style) - RFC 8040 - 2017.

  • gNMI/gNOI (gRPC over HTTP/2) - 2018.

This is a non-exhaustive list, but these are the ones I see really driving momentum.

NETCONF brought many things enabling remote network configuration; to name a few (a short client sketch follows the list):

  • A connection-oriented, client-server session over SSH.

  • Democratization of YANG as a data modeling language (displacing vendor-defined, XML schema-based configuration definitions).

  • Remote Procedure Call (RPC) based operations.

  • Standardization of RPCs to query and configure a network element’s configuration/state.

  • The notion of state and datastores: the configuration and operational datastores respectively track the declared (requested) state and the actual runtime state.

  • Subscription-based network monitoring through a notification framework.
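As promised, here is a minimal sketch of a NETCONF client fetching a device’s running configuration with the ncclient Python library (the address and credentials are placeholders):

```python
from ncclient import manager

# Open a NETCONF-over-SSH session (placeholder device and credentials)
with manager.connect(
    host="192.0.2.1",
    port=830,               # IANA-assigned NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,   # lab-only shortcut
) as m:
    # Standard RPC: retrieve the running configuration datastore
    running = m.get_config(source="running")
    print(running)
```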

RESTCONF uses HTTP methods to implement the equivalent of NETCONF operations, enabling basic CRUD operations on a hierarchy of conceptual resources. As RFC 8040 defines it: “The HTTP POST, PUT, PATCH, and DELETE methods are used to edit data resources represented by YANG data models. These basic edit operations allow the running configuration to be altered by a RESTCONF client.”

It basically made interacting with network elements even simpler for developers.
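As an illustration, here is a sketch of the same kind of read over RESTCONF using Python’s requests library, against the standard ietf-interfaces YANG model (device address, credentials, and interface name are placeholders):

```python
import requests

BASE = "https://192.0.2.1/restconf/data"   # hypothetical device
HEADERS = {"Accept": "application/yang-data+json"}

# GET one interface's configuration and state as JSON
resp = requests.get(
    f"{BASE}/ietf-interfaces:interfaces/interface=eth0",
    auth=("admin", "admin"),
    headers=HEADERS,
    verify=False,  # lab-only: skip TLS certificate verification
)
resp.raise_for_status()
print(resp.json())
```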

gRPC Network Management Interface (gNMI) and gRPC Network Operations Interface (gNOI) brought a new paradigm for network element monitoring, with bulk data collection through streaming telemetry. They also provide a far more efficient underlying RPC protocol, leveraging gRPC over HTTP/2.
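Below is a sketch of a one-shot gNMI read using the pygnmi Python library (target address, port, and credentials are placeholders, and the OpenConfig path assumes the device supports that model); in practice, a Subscribe RPC would stream these counters continuously:

```python
from pygnmi.client import gNMIclient

# Hypothetical gNMI-enabled device
with gNMIclient(target=("192.0.2.1", 57400),
                username="admin", password="admin",
                insecure=True) as gc:
    # One-shot Get on an OpenConfig path; streaming telemetry
    # would use the Subscribe RPC instead
    result = gc.get(path=["/interfaces/interface[name=eth0]/state/counters"])
    print(result)
```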

Another important thing to note is that all three protocols rely heavily on YANG as a data modeling language (RFC 6020 - 2010), which opened the door to model-driven programmability.

This enabled the open source networking communities to standardize network element configuration data models in a vendor-neutral way. For a telco striving to abstract its underlying network infrastructure and reach a high level of interoperability, this has become very attractive.

The most adopted and mature models are the ones from OpenConfig, mostly for routers, and Open ROADM for optical elements. But not all vendors support them, and there is a lot of mapping to perform between vendor models and the OpenConfig models when trying to abstract the whole network with them.
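To give a feel for that mapping burden, here is a purely illustrative sketch (the vendor payload shape and field names are invented) that translates a vendor-specific interface record into an openconfig-interfaces style structure:

```python
# Invented vendor-specific payload for one interface
vendor_intf = {"ifName": "eth0", "adminUp": True, "descr": "uplink to core"}

def to_openconfig(intf: dict) -> dict:
    """Map a hypothetical vendor interface record onto the
    openconfig-interfaces structure."""
    return {
        "openconfig-interfaces:interfaces": {
            "interface": [{
                "name": intf["ifName"],
                "config": {
                    "name": intf["ifName"],
                    "description": intf["descr"],
                    "enabled": intf["adminUp"],
                },
            }]
        }
    }

print(to_openconfig(vendor_intf))
```

Multiply this by every vendor, every model revision, and every corner case where the models disagree, and the integration cost becomes clear.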

How SDN has evolved

Initially, there was a proliferation of SDN controllers, most of them focused on OpenFlow and its companion switch implementation, Open vSwitch (OVS). But some took a different approach and provided more of a platform, where one could turn on the protocols one cared about (ONOS, OpenDaylight).

What really made SDN a thing was its adoption in OpenStack through Open Virtual Network (OVN) around 2016. OpenStack, created in 2010, has proven the capabilities SDN has to offer and, at the same time, made open source networking a real thing (though it took a few years for this to happen).

It also streamlined Network Function Virtualization (NFV), making itself the default platform for the telecommunications industry to run vendor-provided network functions.

Since then, a lot has happened in the open source community (and in the various standards bodies). To name two, the Linux Foundation Networking (LFN) and the Open Networking Foundation (ONF) helped bring together vendors, operators, and enterprises. They both host a number of projects important to momentum and adoption (ONOS, P4, ONAP, OpenDaylight, and Open vSwitch, to name a few).

The virtualization of infrastructure

As software evolved and more systems became virtualized, telcos saw the opportunity to have their network functions virtualized. Decoupling software from hardware enables the consumption of cost-effective commodity hardware and the optimization of the overall infrastructure.

For this to happen, telcos had to arm wrestle with network equipment vendors once again, this time so they would make their network functions run outside of their custom-built, dedicated hardware.

Also, making network functions virtual created a whole new domain of expertise: the infrastructure on which they run, the Network Function Virtualization infrastructure (NFVi). As you virtualize, you add a layer of abstraction that has a cost in terms of networking, compute, and memory, and that cost has to be engineered down, because an impact on the end user is not acceptable.

The LFN and its community created OPNFV, a project aimed at integrating the various software stacks and enabling the validation of vendor-provided Virtual Network Functions (VNFs) on standardized infrastructure. The Cloud iNfrastructure Telco Task Force (CNTT) is another telco-centric open source initiative striving for similar goals.

Are vendor-provided VNFs really successful, though? They still require a significant amount of integration and customization of the underlying infrastructure to reach the expected performance, defeating the initial promise of a shared infrastructure.

In parallel, standards bodies (ETSI, TM Forum, GSMA, etc.) have been standardizing the interfaces to control and manage these network functions and their related OSS/BSS stacks, but adoption is largely regional. Whether in EMEA, APAC, or NA, not every telco wants the same standard, which makes things even harder for the vendor ecosystem.

A new era with containers

We've covered a lot of ground on the evolution of SDN and open source networking from inception to virtualization of infrastructure. In the next post we'll take a look at SDN, containers, service mesh and more.


About the author

Alexis de Talhouët is a Telco Solutions Architect at Red Hat, supporting North American telecommunications companies. He has extensive experience in software architecture and development, release engineering and deployment strategy, hybrid multi-cloud governance, and network automation and orchestration.
