
Welcome to the final installment of our three-part series of Q&As from the launch of Red Hat Storage Server 3. In this part, we look at workloads.


WORKLOADS

What kind of growth in big data do you feel we will see over the next five years?

Analyst estimates for the big data market range from 30% to 45% CAGR, but a couple of trends we hear from our customers may be more significant than the growth figures. First, unstructured data already accounts for about 80% of all enterprise data, and it is growing faster than traditional structured data. Second, patching traditional systems does not address the performance, scale, and portability demands of big data workloads. To address big data challenges cost-effectively, enterprises are looking toward agile, software-defined infrastructures that are purpose-built for big data workloads.

How would software-defined storage benefit Hadoop infrastructure?

Red Hat Storage Server provides additional choice and flexibility for Hadoop workloads. Hadoop programmers and administrators who are forced to work within the constraints of HDFS often complain that HDFS is not POSIX-compliant, that it has a single point of failure, and that they would like to avoid cumbersome data movement from their storage platform into HDFS for Hadoop analytics. The Hadoop plugin for Red Hat Storage addresses each of those concerns by letting customers keep data in place and run not only MapReduce directly on top of Red Hat Storage Server but also Hadoop management and orchestration tools such as Ambari, Oozie, ZooKeeper, Sqoop, and Flume.
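For a concrete sense of what in-place analytics looks like, here is a minimal sketch using the community glusterfs-hadoop plugin (the upstream counterpart of the Red Hat Storage plugin). The server name, volume name, mount point, and data paths are hypothetical, and it assumes core-site.xml has already been configured to map the glusterfs:// scheme to the plugin's FileSystem class; details vary by plugin and Hadoop version.

```
# Mount the Red Hat Storage volume with the native GlusterFS client
# (hypothetical server and volume names):
mount -t glusterfs rhs-node1:/analytics /mnt/glusterfs

# Run a stock MapReduce job directly against the in-place data --
# no staging copy into HDFS is required (illustrative paths):
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount glusterfs:///weblogs/raw glusterfs:///weblogs/counts
```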

Beyond capacity, how do you see new workloads changing the performance requirements of storage, and how does SDS evolve to address key metrics?

We see workloads changing the performance requirements of storage in two ways. The first is the focus on unstructured data: customers demand that their storage platform be optimized to store and retrieve any form of unstructured data at scale. The second is the approach customers take to build a hybrid storage stack that meets their SLAs. For instance, we find that cybersecurity analytics customers resort to a hybrid storage model in which software-defined storage is used to build a federated cold storage layer across index servers running direct-attached storage. Much of the focus in the open software-defined storage community is on providing advanced data tiering and storage optimization features (such as erasure coding, bit rot detection, and support for NFSv4) to better enable these workloads by addressing metrics around datacenter utilization and security.
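To make two of those features concrete, here is roughly how they surface in the upstream GlusterFS CLI in the releases where they shipped (dispersed volumes in GlusterFS 3.6, bit rot detection in 3.7). The volume, server, and brick names are made up for illustration.

```
# Create a dispersed (erasure-coded) volume: 6 bricks holding 4 data
# fragments plus 2 redundancy fragments, so any 2 bricks can fail:
gluster volume create coldtier disperse 6 redundancy 2 \
  server{1..6}:/bricks/cold
gluster volume start coldtier

# Enable bit rot detection, which periodically scrubs the volume and
# flags silently corrupted files:
gluster volume bitrot coldtier enable
```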

 

