To take full advantage of big data, enterprises must adopt a holistic approach and transform their view of storage from a ‘data destination’ into a ‘data platform.’ To serve as a platform for big data, and not just a destination for data storage, enterprise storage solutions need to deliver cost-effective scale and capacity; eliminate data migration and incorporate the ability to grow without bound; bridge legacy storage silos; provide global accessibility of data; and protect and maintain the availability of data.
1. Deliver cost-effective scale and capacity
To minimize cost, a big data storage platform must take a ‘scale-out’ approach, achieving scale by pooling industry-standard commodity servers and storage devices. This offers not only low costs today, but also the ability to benefit from increased buying power as hardware gets better, faster and cheaper over time. An effective big data storage system must also be scalable in the performance dimension, so that applications experience no degradation in performance as the volume of data in the system is increased.
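As a rough illustration of the scale-out idea (a sketch only, not Red Hat Storage's actual placement algorithm, and all node names are hypothetical), the snippet below spreads data across a pool of commodity nodes with a consistent-hash ring, so capacity grows simply by adding nodes to the pool:

```python
import bisect
import hashlib

class HashRing:
    """Place keys on a ring of virtual points so load spreads evenly."""

    def __init__(self, nodes, vnodes=100):
        # Many virtual points per physical node smooth out the distribution.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    @staticmethod
    def _hash(value):
        # MD5 used only as a cheap, stable hash, not for security.
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # The first ring point clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._points)
        return self._ring[idx][1]

ring = HashRing(["node1", "node2", "node3"])
print(ring.node_for("/exports/video/clip-0042.mp4"))   # e.g. "node2"
```

Because placement is computed rather than recorded centrally, adding commodity nodes raises both capacity and aggregate throughput without a single controller becoming the bottleneck.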
2. Eliminate data migration
With enterprise data stores now approaching petabyte sizes, wholesale data migration is no longer logistically or financially feasible. A big data platform must eliminate the need for periodic data migration by providing a system that can grow without bound.
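To make "grow without bound" concrete, here is a toy comparison under illustrative assumptions (not a model of any particular product): with naive hash-modulo placement, adding an eleventh node to a ten-node pool relocates almost every object, while consistent-hash placement relocates only roughly one object in eleven:

```python
import hashlib

def h(s):
    # 128-bit stable hash (MD5 used for convenience, not security).
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

keys = [f"object-{i}" for i in range(10_000)]

# Naive placement: node index = hash % node_count. Changing the node
# count from 10 to 11 reshuffles almost every object.
moved_modulo = sum(h(k) % 10 != h(k) % 11 for k in keys)

# Ring placement: a key belongs to the node closest clockwise on a
# 2**128 ring. Adding one node relocates only the keys it takes over.
def owner(key, nodes):
    return min(nodes, key=lambda n: (h(n) - h(key)) % 2**128)

old = [f"node{i}" for i in range(10)]
new = old + ["node10"]
moved_ring = sum(owner(k, old) != owner(k, new) for k in keys)

print(f"modulo placement moved {moved_modulo / len(keys):.0%} of objects")
print(f"ring placement moved {moved_ring / len(keys):.0%} of objects")
```

The difference is why a platform built for incremental growth can expand in place instead of forcing periodic forklift migrations.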
3. Bridge legacy storage silos
To fully exploit the opportunities of big data, companies must be able to access and use all of their data without ad-hoc intervention. To enable this, a big data storage platform must bridge existing legacy storage silos, rather than simply add yet another storage solution to the mix.
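A minimal sketch of the "bridge, don't replace" idea, with hypothetical backend classes standing in for NAS and object-store silos: a single namespace facade routes each path to whichever silo owns it, so applications see one pool:

```python
class UnifiedNamespace:
    """Route one logical namespace across several legacy silos."""

    def __init__(self):
        self._mounts = {}   # namespace prefix -> backend silo

    def attach(self, prefix, backend):
        self._mounts[prefix] = backend

    def read(self, path):
        # Longest matching prefix decides which silo owns this subtree.
        prefix = max(
            (p for p in self._mounts if path.startswith(p)), key=len
        )
        return self._mounts[prefix].read(path)

class NFSFilerSilo:
    def read(self, path):
        return f"<bytes from the NFS filer for {path}>"

class ObjectStoreSilo:
    def read(self, path):
        return f"<bytes from the object store for {path}>"

ns = UnifiedNamespace()
ns.attach("/finance", NFSFilerSilo())
ns.attach("/media", ObjectStoreSilo())
print(ns.read("/media/raw/frame-0001.dpx"))
```

Applications keep a single view of the data while each silo continues serving the subtree it already holds.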
4. Provide global accessibility of data
A centralized approach to data management is no longer workable in the age of big data: data volumes are too large, WAN bandwidth is too limited, and the consequences of a single point of failure are too costly. A big data storage platform must be able to manage, as a single, unified pool, data that is distributed across the global enterprise.
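The sketch below shows one toy way (assumed sites, latencies, and replica map, not the product's actual mechanism) to treat globally distributed replicas as a single pool: metadata records which sites hold each object, and reads are served from the replica nearest the client rather than from one central copy:

```python
# Toy site-to-site latencies in milliseconds (symmetric).
LATENCY_MS = {
    frozenset({"us-east", "eu-west"}): 80,
    frozenset({"us-east", "apac"}): 180,
    frozenset({"eu-west", "apac"}): 150,
}

# Metadata: which sites hold a replica of each object.
REPLICA_MAP = {
    "/research/genome.db": ["us-east", "apac"],
    "/sales/q3.parquet": ["eu-west", "apac"],
}

def latency(a, b):
    return 1 if a == b else LATENCY_MS[frozenset({a, b})]

def nearest_replica(path, client_site):
    # Serve the read from whichever replica is closest to the caller.
    return min(REPLICA_MAP[path], key=lambda s: latency(client_site, s))

print(nearest_replica("/research/genome.db", "eu-west"))   # -> "us-east"
```

Keeping replicas in several sites sidesteps both the WAN bottleneck of a central store and its single point of failure.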
5. Protect and maintain the availability of data
Rather than seeking to protect against failure through the use of proprietary, enterprise-grade hardware, a big data storage platform must assume that hardware failure is inevitable and offer data availability and integrity through intelligent software.
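As a hedged sketch of software-level protection on failure-prone commodity hardware (illustrative names and parameters throughout, not Red Hat Storage's implementation), the following writes each object to N replicas and succeeds once a write quorum W acknowledges, so a single node failure costs neither data nor availability:

```python
import random

N, W = 3, 2   # replicas per object and the write quorum

class Node:
    def __init__(self, name):
        self.name, self.alive, self.store = name, True, {}

    def write(self, key, value):
        if not self.alive:
            raise OSError(f"{self.name} is down")
        self.store[key] = value

def quorum_write(replicas, key, value):
    # Tolerate individual failures; succeed once W replicas acknowledge.
    acks = 0
    for node in replicas:
        try:
            node.write(key, value)
            acks += 1
        except OSError:
            continue   # the software routes around the dead node
    if acks < W:
        raise OSError(f"only {acks} of {W} required replicas acknowledged")
    return acks

cluster = [Node(f"node{i}") for i in range(N)]
random.choice(cluster).alive = False          # simulate a hardware failure
print(quorum_write(cluster, "/archive/2012-10.tar", b"..."))   # -> 2
```

The design choice is the same one the paragraph describes: assume hardware will fail, and make the software, not the hardware, responsible for availability and integrity.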
We expand on these must-haves for big data storage in a newly published white paper. If you’re interested in a deeper look into these areas, download that white paper here. Or, if you’re interested in Red Hat Storage, visit www.redhat.com/liberate.