Datasheet

Red Hat Hyperconverged Infrastructure for Virtualization server configurations

Updated: April 29, 2020

Validated server configurations for Red Hat® Hyperconverged Infrastructure simplify and reduce the risk of designing and deploying hyperconverged computing infrastructure (HCI). Tested and optimized, these validated server configurations combine with Red Hat Hyperconverged Infrastructure for Virtualization to yield platforms that are durable and highly available. The validated server configurations help predictably consolidate infrastructure, producing operational efficiencies through optimized management workflows. In addition, unified storage and compute resources provide simplified, low-touch operations.

Red Hat Hyperconverged Infrastructure for Virtualization

Red Hat Hyperconverged Infrastructure for Virtualization offers an open, simple, and optimized platform for your application workloads in a small footprint. The platform integrates Red Hat Virtualization, Red Hat Gluster Storage, and Red Hat Ansible® Automation Platform in a single solution to simplify deployment and management for remote office, small datacenter, and enterprise edge environments. By eliminating the need for a discrete storage tier, the solution removes many of the traditional burdens associated with acquisition, setup, and day-to-day operations, letting you focus on more valuable tasks. The benefits of open hyperconverged infrastructure include:

  • Infrastructure consolidation. Open hyperconverged infrastructure lets you consolidate to a smaller physical footprint. Deploying a smaller set of servers saves space and creates greater efficiencies.
  • Reduced risk. Red Hat Hyperconverged Infrastructure for Virtualization is built on a mature Red Hat infrastructure stack, including operating system, virtualization, software-defined storage, and IT automation technology.
  • Innovation without proprietary lock-in. Upstream communities deliver continuous open source innovation, yielding greater flexibility and lower costs without arbitrary proprietary limitations.
  • Datacenter transformation. Hyperconverged infrastructure is the first step toward more flexible and highly scalable datacenter management, allowing organizations to start small and grow over time without rip-and-replace upgrades.
  • Cost optimization. Hyperconverged infrastructure leads to lower operational costs by managing compute and software-defined storage resources together through a single, easy-to-use interface, with the ability to manage remote sites from a central location.
  • Containerization and virtualization. Open hyperconverged infrastructure lets you run virtualized workloads and containers on the same infrastructure, making them more straightforward and cost-effective to manage, including Kubernetes deployments.

Validated industry-standard configurations

Tested and validated server configurations help provide predictable performance and reduce risk for organizations adopting hyperconverged infrastructure. Configurations are based on HPE ProLiant DL360 Gen10 and HPE ProLiant DL380 Gen10 servers (Figure 1), configured as shown in Table 1.
 

Figure 1. HPE ProLiant DL360 Gen10 (top) and HPE ProLiant DL380 Gen10 servers


For the testing, three-node Red Hat Hyperconverged Infrastructure for Virtualization clusters were assembled and validated using the different HPE ProLiant servers and distinct workloads. Red Hat evaluated both capacity-optimized and throughput-optimized server configurations to determine the ideal configurations for building clusters that serve different workload categories, as follows:

  • Capacity-optimized clusters for general server consolidation. Organizations increasingly want to consolidate general-purpose servers and proprietary storage appliances—particularly at edge locations—into a single hyperconverged server cluster. The cluster can replace special-purpose storage appliances and bare-metal servers while providing flexible, highly available virtual machines (VMs) with corresponding data protection. This workload category does not typically exhibit particularly demanding input/output (I/O) characteristics, instead requiring only basic I/O performance with adequate storage capacity.
  • Throughput-optimized clusters for demanding I/O operations. Throughput-optimized clusters must handle significant I/O requirements, such as ingesting a steady stream of data at an edge location. Operators deploying infrastructure to address this workload category typically seek to capture data from remote sensors and data acquisition equipment at the rate of data generation, while protecting against data loss through highly available VMs and protected storage. This workload category typically has high I/O throughput needs.

Table 1. Validated server configurations

Configuration | Capacity optimized | Throughput optimized
Server platform | HPE ProLiant DL360 Gen10 | HPE ProLiant DL380 Gen10
Dual-socket processor (tested) | Intel Xeon Silver 4116 (12 cores per socket)
Memory/RAM (tested/maximum) | 128GB/256GB
Network adapter | HPE Ethernet 10Gb 2-port 562FLR-SFP+
Input/output (I/O) controller | HPE Smart Array P816i-a SR Gen10 (16 internal lanes, 4GB cache/SmartCache) 12G SAS Modular Controller
Data drives | HPE 2.4TB SAS 12G Enterprise 10K SFF (2.5in) SC 3-year warranty, 512e digitally signed firmware hard disk drive (HDD) | HPE 6TB SAS 12G Midline 7.2K LFF (3.5in) SC 1-year warranty, 512e digitally signed firmware HDD
Data drive quantity | 8 | 12
Operating system (OS) drives | HPE 960GB SATA 6G Read Intensive SFF (2.5in) SC 3-year warranty digitally signed firmware solid-state drive (SSD)
OS drive quantity and data protection | 2 (RAID 1) | 2 (RAID 1)
Virtual machine (VM) density (3-node cluster example*) | 42 | 90
Maximum ingest throughput (3-node cluster example**) | 320MiB/s | 705MiB/s

* VM density calculated assuming an average of four virtual central processing units (vCPUs) and 16GiB RAM per VM, with 300% CPU oversubscription and 150% memory oversubscription. This example also assumes that all VMs are highly available; that is, all VMs can continue to run following the loss of one node in a three-node cluster.

** Ingest throughput was measured at a 90/10 write/read I/O ratio using the fio open source benchmark utility.

Server testing approach

To validate the server configurations, Red Hat used the fio and DVD Store 3 tools to test the capabilities of the three-node hyperconverged clusters. Tests measured both I/O throughput and latency at increasing workload scales. All experiments were performed from VM clients running within the Red Hat Hyperconverged Infrastructure for Virtualization cluster, transacting I/O with virtual disks backed by Quick EMUlator (QEMU) Copy on Write (qcow2) images stored within the hyperconverged trusted storage pool. Dual 10GbE network connections ran between cluster nodes, one for front-end VM access and one for back-end VM management, migration, and storage traffic.
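
For context, the following is a minimal sketch of how a qcow2 image, the format backing the test clients' virtual disks, can be created and inspected with the qemu-img utility. The path and size are hypothetical; in an actual Red Hat Hyperconverged Infrastructure for Virtualization deployment, VM disks are provisioned through the platform's management interface rather than by hand.

    # Minimal sketch only: the image path and size below are hypothetical.
    # In a real deployment, VM disks are provisioned by the platform; this
    # simply illustrates the qcow2 format used to back the test disks.
    import json
    import subprocess

    IMAGE_PATH = "/tmp/test-client-disk.qcow2"  # hypothetical location
    IMAGE_SIZE = "100G"                         # hypothetical virtual size

    def create_qcow2_image(path: str, size: str) -> None:
        """Create a sparse qcow2 image with the given virtual size."""
        subprocess.run(["qemu-img", "create", "-f", "qcow2", path, size], check=True)

    def image_format(path: str) -> str:
        """Return the on-disk format reported by qemu-img (expected: 'qcow2')."""
        result = subprocess.run(
            ["qemu-img", "info", "--output=json", path],
            check=True, capture_output=True, text=True,
        )
        return json.loads(result.stdout)["format"]

    if __name__ == "__main__":
        create_qcow2_image(IMAGE_PATH, IMAGE_SIZE)
        print("image format:", image_format(IMAGE_PATH))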

Throughput tests were performed with the fio tool as 30-minute timed tests using 4GiB file sizes, 4MiB block sizes, a 90%/10% write/read mix, and the direct=1 setting. Each Red Hat Hyperconverged Infrastructure for Virtualization host ran one load-client VM, and each client executed one fio job. Total write throughput was calculated as the sum of the three concurrent client jobs. Read throughput results were discarded. Following a warm-up run, final results were derived from the average of three test runs.
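
As an illustration only (not Red Hat's actual test harness), the sketch below shows how one such client job with the parameters described above might be launched and its write bandwidth read back from fio's JSON output. The job name, target directory, and the choice of sequential mixed I/O are assumptions made for the example; only the runtime, file size, block size, write/read mix, and direct I/O setting come from the text.

    # Illustrative sketch: job name, target directory, and the sequential
    # mixed-I/O pattern are assumptions; only the parameters stated in the
    # text (runtime, file size, block size, write/read mix, direct I/O) are
    # taken from the datasheet.
    import json
    import subprocess

    TARGET_DIR = "/mnt/testdisk"  # hypothetical mount point of the client VM's data disk

    def run_ingest_job(runtime_s: int = 1800) -> float:
        """Run one fio job and return the measured write bandwidth in MiB/s."""
        cmd = [
            "fio",
            "--name=ingest",            # hypothetical job name
            f"--directory={TARGET_DIR}",
            "--rw=rw",                  # mixed sequential reads and writes
            "--rwmixwrite=90",          # 90% writes / 10% reads
            "--bs=4m",                  # 4MiB block size
            "--size=4g",                # 4GiB file size
            "--direct=1",               # bypass the page cache
            "--time_based",
            f"--runtime={runtime_s}",   # 1,800s = 30-minute timed run
            "--output-format=json",
        ]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        job = json.loads(result.stdout)["jobs"][0]
        return job["write"]["bw"] / 1024.0  # fio reports bandwidth in KiB/s

    if __name__ == "__main__":
        # One such job runs in a load-client VM on each of the three hosts;
        # cluster ingest throughput is the sum of the three concurrent results.
        print(f"write throughput: {run_ingest_job():.1f} MiB/s")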
 

Red Hat anticipates that similarly configured industry-standard servers from other vendors would yield similar results.