Red Hat Blog
Red Hat and Cisco have worked together for a long time, including our collaboration on Red Hat OpenStack Platform.
Out with the old...
By jumping into the high-density, storage-optimized server market, Cisco validates what we see as the continued movement toward software-defined, scale-out architectures for solutions like OpenStack, container-native storage, and hyper-converged infrastructure.
With the ability to spread data across multiple servers, both Red Hat Ceph Storage and Red Hat Gluster Storage are helping to drive this trend. Open, software-defined storage enables enterprises to build an elastic cloud infrastructure for newer, data-intensive workloads.
Ceph provides unified block, object, and file interfaces on top of a distributed object store (RADOS) at its core, while Gluster provides an elastic, scale-out NAS file storage system.
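The "spread data across multiple servers" idea rests on deterministic, hash-based placement: every client can compute where an object lives without consulting a central lookup table. The sketch below is a hypothetical toy in that spirit; it is not Ceph's actual CRUSH algorithm or Gluster's elastic hashing, and the function and server names are illustrative only.

```python
import hashlib

def place(obj_name: str, servers: list[str], replicas: int = 2) -> list[str]:
    """Pick `replicas` distinct servers for an object by ranking every
    server on a hash of (object name, server). A toy stand-in for the
    deterministic placement that CRUSH and elastic hashing perform:
    any client computes the same answer with no central metadata server."""
    ranked = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{obj_name}:{s}".encode()).hexdigest(),
    )
    return ranked[:replicas]

servers = ["node1", "node2", "node3", "node4"]

# Deterministic: every client asking for the same object gets the same nodes.
assert place("vm-disk-001", servers) == place("vm-disk-001", servers)
```

Because placement is computed rather than looked up, adding a server changes only the objects whose hash ranking it wins, which is what lets these systems scale out by simply adding nodes.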
As more organizations move from appliances and traditional SAN arrays to open source SDS, they often lack the recipes for a best-practice deployment. Red Hat has worked with Cisco to produce reference design architectures that take the guesswork out of configuring throughput-optimized, cost/capacity-optimized, and emerging high-IOPS clusters, including whitepapers for both Red Hat Ceph Storage and Red Hat Gluster Storage on Cisco's previous generation of the S-Series, the C3160 high-density rack server.
Open source drives storage innovation
Both Ceph and Gluster use community-powered innovation to accelerate their core feature sets faster than what is possible via a single proprietary vendor. Red Hat is a top contributor to both Ceph and Gluster upstream development, but several hardware, software and cloud service providers, including eBay, Yahoo!, CERN (Ceph) and Facebook (Gluster), all contribute to the code base. Cisco itself is a top-50 contributor to Ceph in terms of code commits.
The Cisco UCS S-Series builds on the x86 storage-optimized server trend – but seemingly shuffles the deck with more of an enterprise spin via features such as dual-node servers, quadruple fans and power supplies, connected to Cisco UCS Fabric Interconnects.
One aspect of the new UCS S-Series design we are excited about is its versatility. UCS offers a common, consistent architecture for a variety of IT needs, which we expect may enable it to become a standard hardware building block for enterprise environments. The S-Series includes a modular chassis design that facilitates upgrades to new Intel chipsets, as well as a disk expander module that provides the ability to swap out a server node for an additional four drives (increasing the raw capacity from 560 to 600 TB).
Cisco has also integrated networking fabric into its storage-optimized servers, making it easier to extend your interconnect as your cluster scales out. The S3260 offers dual 40GbE ports for each server node. As one moves to denser servers (with more than 24 drives) in Ceph configurations, the need for 40Gb Ethernet grows. Enterprises can benefit from a tightly integrated fabric interconnect, which translates to lower latency, important for applications like video streaming.
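A back-of-envelope calculation shows why dense nodes outgrow 10GbE. The figures below are assumptions for illustration (roughly 150 MB/s sequential throughput per HDD, a 28-drive node), not vendor specifications:

```python
# Assumed figures for a rough estimate -- not vendor specs.
HDD_MBPS = 150          # assumed sequential throughput per HDD, in MB/s
DRIVES_PER_NODE = 28    # a "denser than 24 drives" configuration

# Aggregate disk throughput converted from MB/s to Gb/s (x8 bits, /1000).
disk_throughput_gbps = DRIVES_PER_NODE * HDD_MBPS * 8 / 1000

print(f"aggregate disk throughput: {disk_throughput_gbps:.1f} Gb/s")
# 28 drives x 150 MB/s = 4200 MB/s = 33.6 Gb/s: far beyond a 10GbE link,
# but within a single 40GbE port.
assert disk_throughput_gbps > 10
```

Ceph's replication traffic on the cluster network pushes the real requirement higher still, which is why dual 40GbE ports per node are a good fit for dense configurations.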
A key piece is the UCS Manager configuration and management tool, which can simplify deployment. UCS Manager enables the creation of an initial configuration profile for storage, network, compute, etc. for the S3260, helping customers grow their Ceph environments more easily by pushing the profile out to additional S3260s as they expand.
Combined with Red Hat Storage's ability to handle block, object, and file access, and its flexibility across throughput-optimized, cost/capacity-optimized, and high-IOPS workloads, Cisco's UCS S-Series may be not just a jack of all trades, but also a master of many.
Stay tuned for more upcoming joint solution papers from the Cisco UCS S3260 and Red Hat Ceph Storage teams. In the interim, learn more about the UCS S-Series at cisco.com/go/storage.