A large telecommunications customer running Red Hat® OpenStack® Platform was using Generic Routing Encapsulation (GRE) tunnels with virtual local area networks (VLANs) to provide multitenant networking for their clients. This customer was running Red Hat OpenStack Platform 3 (based on the "Grizzly" upstream project). They encountered a performance issue within Open vSwitch that led to almost modem-like performance for some of their tenants when using large packets.
The underlying issue was in the kernel's network stack, which wasn't efficiently handling the combination of VLAN tagging and GRE encapsulation. The kernel's packet handling had to be reworked to optimize for this scenario.
Red Hat immediately took steps to support the customer
First, we fixed the networking issue in the upstream kernel and backported the fix into the Red Hat Enterprise Linux kernel our customers run. But upstream kernel networking changes are often lengthy ordeals, and we needed a way to address the customer's issue quickly.
To do this, we modified the upstream (Havana) OpenStack Networking service (Neutron) to work around the kernel limitation by selectively applying a different set of flow rules in this configuration. We then backported this fix into the Grizzly OpenStack Networking service (then called Quantum).
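Neutron's Open vSwitch agent applies this kind of workaround by programming OpenFlow rules on the tunnel bridge. A minimal sketch of the kind of rules involved, assuming a bridge named br-tun with the GRE port on port 2; the VLAN ID, tunnel ID, and port numbers are illustrative, not taken from the customer's deployment:

```
# Outbound: strip the local VLAN tag and map it to the tenant's
# GRE tunnel ID before sending traffic out the tunnel port.
ovs-ofctl add-flow br-tun "in_port=1,dl_vlan=101,actions=strip_vlan,set_tunnel:0x65,output:2"

# Inbound: map the GRE tunnel ID back to the local VLAN tag.
ovs-ofctl add-flow br-tun "in_port=2,tun_id=0x65,actions=mod_vlan_vid:101,output:1"
```

By choosing which of these flow translations to install for a given configuration, the agent could keep the VLAN-plus-GRE traffic on a path the kernel handled efficiently.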
Why tight engineering of technologies is essential
Triaging this issue required the deep technical knowledge of kernel networking engineers and OpenStack Neutron developers, not to mention skilled front-line support staff. It's a great example of how Python skills at the management and orchestration layer alone aren't enough to support a customer. Without the tight engineering of Red Hat Enterprise Linux and our OpenStack technology, this critical fix wouldn't have been possible.