All too often, enterprise IT teams are forced to react to the onslaught of data by creating storage silos, each with its own IT operations model. Traditionally, there is a storage silo for each application workload: a silo for database data; a silo for shared file data; a silo for web object data; and so on. This reactive approach not only increases the capital expense for storage, but also drives up ongoing operational expenses: different management tools, different provisioning tools, different skill sets. Given the size and rapid growth of data, and the prohibitive cost of copying large data sets around the enterprise, enterprises simply cannot afford to build dedicated storage silos.
The ideal approach is to have all the data reside in a general enterprise storage pool and make the data accessible to many enterprise workloads. This provides a unified platform to procure, provision, and manage enterprise storage that is agnostic to data type, whether files, objects, or semi-structured and unstructured data. By implementing such solutions, organizations can realize significant reductions in operating expenses and increased service levels for end users.
In addition, a centralized approach to data management is no longer feasible in the age of big data. Data sets are too large, WAN bandwidth is too limited, and the consequences of a single point of failure are too costly. A big data storage platform must be able to manage data through a single, unified pool distributed across the global enterprise.
And, rather than attempting to protect against failure through the use of proprietary, enterprise-grade hardware, an open big data storage platform can assume that hardware failure is inevitable and offer reliable data availability and integrity through intelligent software. Accomplishing this requires a different approach by storage software vendors – one that is based on community-driven innovation. Community-driven innovation is the hallmark of a true open source approach to solving enterprise storage problems. For example, the emerging area of big data alone has more than 100 distinct open source big data projects with thousands of software developers contributing code, enhancing features, and increasing stability. It is hard to match this pace of innovation when software is being written within a vendor’s four walls.
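The idea of tolerating inevitable hardware failure through intelligent software can be illustrated with a small sketch. The `ReplicatedStore` class below is hypothetical (it is not the Red Hat Storage API); it simply mirrors each write across several directories standing in for commodity nodes, so that a read still succeeds after any single "node" is lost:

```python
import os
import tempfile


class ReplicatedStore:
    """Minimal sketch of software-defined redundancy: every write is
    mirrored to N independent directories (standing in for commodity
    storage nodes), and reads fall back to any surviving replica."""

    def __init__(self, replica_dirs):
        self.replica_dirs = replica_dirs
        for d in replica_dirs:
            os.makedirs(d, exist_ok=True)

    def put(self, name, data):
        # Write to every replica; the object survives any single failure.
        for d in self.replica_dirs:
            with open(os.path.join(d, name), "wb") as f:
                f.write(data)

    def get(self, name):
        # Read from the first replica that still holds the object.
        for d in self.replica_dirs:
            try:
                with open(os.path.join(d, name), "rb") as f:
                    return f.read()
            except OSError:
                continue  # this "node" has failed; try the next one
        raise FileNotFoundError(name)


# Usage: three "nodes", one of which we knock out to simulate a disk loss.
base = tempfile.mkdtemp()
store = ReplicatedStore([os.path.join(base, f"node{i}") for i in range(3)])
store.put("report.csv", b"big data")
os.remove(os.path.join(base, "node0", "report.csv"))  # simulate failure
recovered = store.get("report.csv")
```

Real systems in this space layer on self-healing, quorum reads, and erasure coding, but the principle is the same: reliability lives in the software, not in proprietary hardware.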
For more information on how Red Hat Storage helps resolve the big data challenge by creating a unified enterprise storage platform, visit www.redhat.com/liberate.