Virtualization technologies have evolved to the point where support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to bond multiple NICs and run several networks on top of them for high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (via layer 2 Linux bridges), for host storage access (iSCSI, NFS), for migration, for display (SPICE, VNC), or for virtual machine management. While it is possible to consolidate all of these networks into a single network, separating them simplifies management, improves security, and makes it easier to track errors and downtime.
The aforementioned configuration works well but leaves us with a network bottleneck at the host level. All networks compete for the same queue on the NIC (or bond), and by default Linux enforces only a trivial quality-of-service queuing algorithm, pfifo_fast, which keeps three FIFO bands side by side and enqueues packets into a band based on their Type of Service bits or assigned priority. One can easily imagine a case where a single network hogs the outgoing link, for example during a migration storm where many virtual machines are being migrated off the host simultaneously, or when an attacker VM floods the link. The consequences of such cases can include lost connectivity to the management engine or loss of storage access for the host.
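For reference, this is roughly how an unconfigured host looks; the device name eth0 is only an example, and the exact output may vary by kernel version:

```
# Inspect the queueing discipline on the NIC (eth0 is an example device name).
# Without any QoS configuration, a pfifo_fast root qdisc with three bands is typical:
$ tc qdisc show dev eth0
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
```

All egress traffic from every network on that NIC shares these few bands, so nothing stops one network from starving the rest.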
A simple solution is to configure known network hogs (e.g. migration) on top of a separate host NIC. However, this wastes bandwidth and hardware resources.
Red Hat Enterprise Virtualization 3.6 introduces a feature called host network QoS to solve the above-mentioned challenge. Because Red Hat Enterprise Virtualization is co-engineered with Red Hat Enterprise Linux 7, virtualization administrators can leverage a plethora of network QoS algorithms called qdiscs (queueing disciplines) and choose which one to configure on the host NICs. tc (the traffic control command-line tool) can classify different kinds of traffic and apply different priorities to them, and administrators who love the command line can fine-tune these algorithms to their liking. For those who prefer a GUI, RHEV-M (RHEV's management interface) makes things easier. In addition, QoS configured via RHEV-M can be managed at multiple layers (data centers, networks, hosts), is persisted in the engine, and survives host reboots.
RHEV-M leverages the HFSC (hierarchical fair service curve) implementation that ships with Red Hat Enterprise Linux 7. HFSC allows the administrator to guarantee both bandwidth and latency with relatively low performance overhead.
Using HFSC, the administrator can configure a few invariants on the egress traffic (see the tc sketch after this list), including:
- Link sharing. Divide the bandwidth between traffic classes according to configured shares at times when there is contention on the NIC, so that a single class cannot crowd out the others.
- Real time. Allow bursts of traffic to temporarily exceed the link share in order to guarantee lower latency.
- Upper limit. A hard cap on the egress rate of a traffic class.
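To make these three knobs concrete, here is a minimal tc sketch. The device name, class handles, and rates are illustrative assumptions, not the exact commands RHEV-M issues:

```
# Root HFSC qdisc on the NIC; unclassified traffic falls into class 1:999 (illustrative handles).
tc qdisc add dev eth0 root handle 1: hfsc default 999

# Parent class capping total egress at the physical link rate.
tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 1000mbit ul m2 1000mbit

# Example class combining the three invariants:
#   rt - real time:  may burst at 100mbit for 10ms, then guarantees 30mbit
#   ls - link share: entitled to a 300mbit share of the link during contention
#   ul - upper limit: never exceeds 500mbit, even on an otherwise idle link
tc class add dev eth0 parent 1:1 classid 1:10 hfsc \
    rt m1 100mbit d 10ms m2 30mbit \
    ls m2 300mbit \
    ul m2 500mbit

# Default class for everything not explicitly classified.
tc class add dev eth0 parent 1:1 classid 1:999 hfsc ls m2 100mbit
```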
RHEV provides an easy way to configure HFSC. Under the hood, RHEV-M uses tc to configure everything needed to classify the traffic correctly and apply the desired QoS values to the networks the administrator cares about.
It is important to note that since the separation of networks in RHEV is performed via VLAN tagging, traffic is automatically classified by its VLAN tag; this is currently the finest granularity of classification that is supported.
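Conceptually, that classification boils down to tc filters steering each VLAN's traffic into its own HFSC class. A hedged sketch, building on the qdisc above; the VLAN ID, handles, and filter form are assumptions for illustration and may differ from what RHEV-M actually generates:

```
# Steer traffic tagged with VLAN 100 into HFSC class 1:10 (IDs are illustrative).
tc filter add dev eth0 parent 1: protocol all prio 10 \
    basic match 'meta(vlan mask 0x0fff eq 100)' flowid 1:10
```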
Let’s take a look at a real-life example of the feature in action: a management network and a VM network sharing the same NIC. While the VM network can be expected to consume high bandwidth (there may be many busy VMs running on the host simultaneously), the management network is expected to require much less. In terms of latency, however, the management network may be more sensitive than the VM network. In other words, when the NIC is saturated we can allow short delays in VM traffic, but we would like to ensure that packets sent back to the engine over the management network are not delayed beyond a certain bound; otherwise the engine might consider the host unreachable. To achieve this, a high link share can be configured for the VM network and a low link share for the management network. As a result, HFSC will allow bursts of management traffic even while the link is saturated.
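As a rough sketch of what such a policy could look like at the tc level (again with illustrative device names, class IDs, and rates, and reusing the parent class from the earlier sketch):

```
# VM network: high link share (e.g. 90% of a 1 Gbit link under contention).
tc class add dev eth0 parent 1:1 classid 1:100 hfsc ls m2 900mbit

# Management network: low link share, but still guaranteed timely service
# for its (small) traffic even while VM traffic saturates the link.
tc class add dev eth0 parent 1:1 classid 1:200 hfsc ls m2 100mbit
```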
To take this new feature for a spin, visit our download page.
This post was authored by Ido Barkan before leaving Red Hat earlier this year. While Ido will be missed, his work continues in the capable hands of the Red Hat Enterprise Virtualization team.