Digital transformation is about using technologies to achieve agility, responsiveness, elasticity, and speed. According to Ross Turk, Red Hat’s director of product marketing, that means telcos and other companies can respond to business demands more quickly and take advantage of success when it happens. So, what role does storage have in all of this? During his presentation Storage and Your Digital Transformation at Red Hat Summit last spring, Ross shared planning principles and solutions for bringing digital transformation into the core of the datacenter.
In his presentation, Ross frames digital transformation with three principles:
- Distributed deployment: Achieving that agility requires being able to deploy your technology across multiple datacenters.
- Generalized hardware: Today, time is expensive and hardware is not. Companies are buying the same kind of hardware, and lots of it, so that procurement and installation are repeatable.
- Decentralized control: Using platforms like Red Hat OpenShift, Red Hat OpenStack Platform, and public cloud to put technology into the hands of the people who need it.
The IT infrastructure whale
In most datacenters, said Ross, IT behaves like a big whale. Everything is tied together as one unit, so when you’re successful and need to support more infrastructure, you buy a bigger whale.
"I think infrastructure should work more like a school of fish in the shape of a whale," Ross told attendees. "You should be able to add fish and remove fish. If a disease comes along and eats half of your whale, you have a dead whale. If a disease comes along and eats half of your school of fish in the shape of a whale, you have a smaller school of fish in the shape of a whale."
According to Ross, this approach enables datacenter managers to add to and subtract from pooled infrastructure resources as the organization grows, delivering flexibility and agility for service expansion while keeping infrastructure more stable and reliable.
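The "school of fish" idea can be sketched as a pool of interchangeable nodes whose aggregate capacity simply shrinks when members are lost, rather than failing as one unit. A minimal illustration (the class and node sizes here are hypothetical, not a Red Hat API):

```python
# Sketch: pooled infrastructure as a "school of fish" -- losing members
# shrinks the pool, it does not kill it. Node names and sizes are made up.

class StoragePool:
    def __init__(self):
        self.nodes = {}  # node name -> capacity in TB

    def add_node(self, name, capacity_tb):
        self.nodes[name] = capacity_tb

    def remove_node(self, name):
        self.nodes.pop(name, None)

    @property
    def capacity_tb(self):
        return sum(self.nodes.values())

pool = StoragePool()
for i in range(10):
    pool.add_node(f"node{i}", 100)  # ten identical 100 TB nodes

print(pool.capacity_tb)  # 1000

# "A disease eats half the school": the pool survives, just smaller.
for i in range(5):
    pool.remove_node(f"node{i}")

print(pool.capacity_tb)  # 500
```

Contrast this with the "bigger whale" model, where capacity lives in one appliance and growth means a forklift upgrade rather than adding a few more fish.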
What makes storage challenging
"One of the most important, and expensive, parts of your infrastructure is storage," said Ross. But, he noted that it’s also a challenging area to optimize for emerging workloads and to improve agility. In his presentation, Ross outlined the following reasons storage is challenging:
- Storage is essential: Nothing we put in the datacenter works without storage. Every workload users deploy requires an operating system and durable, flexible storage.
- Data has gravity: Once data has been loaded into a storage system, it can be very expensive and time-consuming to move it.
- Storage solutions are sticky: Consequently, once a storage solution has been chosen and deployed, it can become extremely difficult to choose something different.
- Storage is not one-size-fits-all: There are different kinds of storage such as block storage, file storage and object storage. And storage needs vary, meaning that not all storage workloads come in the same size and shape.
- Storage admins are changing: Traditionally, storage admins work on teams that specialize in storage, but modern storage admins operate large-scale platforms and consider the complete picture.
Ross further pointed out that appliances aren’t enough, because when you hide complexity, you also hide flexibility. And public cloud storage isn’t enough either, he said, due to cost. "Pay-as-you-go is expensive and the cost to get data out is expensive," so public cloud storage isn’t a sustainable solution for the long-term for most organizations.
Pairing these challenges with the need for stable and reliable infrastructure to support an ever-growing digital footprint means that storage needs to be re-envisioned outside of traditional storage appliances. Ross shared 2016 research from Red Hat, conducted by Vanson Bourne Ltd., that shows the industry is re-thinking storage.
According to the report, nearly 40 percent of the IT decision makers surveyed cited inadequate storage capabilities as a top weekly pain point, and 70 percent said that their organization’s current storage can’t cope with emerging workloads. The solution? The research shows that 98 percent of IT decision makers surveyed believe a more agile storage solution could benefit their organization.
Ross suggested that a new approach to software-defined storage is the answer. In the presentation, Ross shared Red Hat’s definition for software-defined storage as:
- Server-based: The use of software and standard hardware to provide services traditionally provided by single-purpose storage appliances
- Centralized control: The ability to provision, grow, shrink and decommission massively-distributed storage resources on-demand and programmatically
- Open ecosystem: Software that lets people choose the hardware they deploy it on, backed by an ecosystem that supports that choice
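The "centralized control" point is the key contrast with appliances: capacity becomes something you call an API for. A hypothetical sketch of what "provision, grow, shrink and decommission on demand, programmatically" could look like (the `VolumeService` interface below is invented for illustration, not a real Red Hat API):

```python
# Sketch: programmatic control of storage resources. The VolumeService
# API here is hypothetical, not a real Red Hat interface.

class VolumeService:
    def __init__(self):
        self.volumes = {}  # volume name -> size in GB

    def provision(self, name, size_gb):
        self.volumes[name] = size_gb

    def resize(self, name, size_gb):
        self.volumes[name] = size_gb  # grow or shrink on demand

    def decommission(self, name):
        del self.volumes[name]

svc = VolumeService()
svc.provision("ci-artifacts", 500)   # provision on demand
svc.resize("ci-artifacts", 2000)     # grow when the workload grows
svc.resize("ci-artifacts", 1000)     # ...and shrink back down
svc.decommission("ci-artifacts")     # return capacity to the pool
print(svc.volumes)                   # {}
```

The design point is that no step above involves a purchase order or a storage-team ticket; the same calls work whether the pool has ten nodes or a thousand.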
Red Hat storage behaves like software: the provisioning and service layers sit on top of the physical hardware, so storage can scale and behave just like everything else in your datacenter. In the presentation, Ross walked through the similarities and differences of Red Hat’s storage options:
- Red Hat Ceph Storage is a more intricate architecture. It is fundamentally a scale-out object store.
- Red Hat Gluster Storage is a more streamlined architecture, with fewer moving parts, less overhead and smaller minimum node count. It is fundamentally a distributed file system.
The two options map to different workloads: for things that are close to the application, Red Hat generally prescribes Gluster, and for things that are close to the infrastructure, Ceph.
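What makes a scale-out object store like Ceph scale is that clients compute where an object lives instead of asking a central lookup table. The sketch below shows the idea with plain hashing; it is a drastic simplification for illustration, not Ceph's actual CRUSH algorithm:

```python
# Sketch: deterministic, hash-based object placement -- the core idea
# behind a scale-out object store. This is a simplification, not
# Ceph's real CRUSH algorithm.

import hashlib

def place(obj_name, nodes, replicas=3):
    """Pick `replicas` distinct nodes for an object, deterministically."""
    digest = int(hashlib.sha256(obj_name.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

nodes = [f"osd{i}" for i in range(6)]

# Any client computes the same placement from the object name alone,
# so there is no central metadata server to bottleneck on.
assert place("vm-image-42", nodes) == place("vm-image-42", nodes)
print(place("vm-image-42", nodes))
```

Because placement is a pure function of the name and the node list, adding nodes changes where new data lands without anyone maintaining a directory of every object.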
For those unsure of where to go next, Ross offered some simple questions to start with:
- What storage technology are you currently using?
- Does it scale flexibly and cost effectively at petabyte scale?
- What challenges are you facing with your current storage infrastructure?
- How do you manage provisioning, capacity planning, and migrations with your current storage infrastructure?
- Is your current storage infrastructure capable of handling your needs next year?
He also shared Red Hat’s Storage Savings Calculator, a tool that provides a rough estimate of whether you’re likely to save money with software-defined storage solutions such as those Red Hat offers.
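The arithmetic behind such a comparison is simple enough to sketch. Every number below is a made-up illustration, not output from Red Hat's calculator; the point is only the shape of the trade-off between appliance hardware cost and commodity hardware plus a software subscription:

```python
# Sketch: back-of-the-envelope appliance vs. software-defined storage
# cost comparison. All figures are invented for illustration only.

def total_cost(hw_per_tb, sw_per_tb_year, capacity_tb, years):
    """Hardware cost plus a yearly per-TB software subscription."""
    return hw_per_tb * capacity_tb + sw_per_tb_year * capacity_tb * years

# Hypothetical: pricier appliance hardware with bundled software, vs.
# cheaper commodity hardware with a paid software subscription.
appliance = total_cost(hw_per_tb=900, sw_per_tb_year=0, capacity_tb=500, years=3)
sds = total_cost(hw_per_tb=300, sw_per_tb_year=100, capacity_tb=500, years=3)

print(appliance, sds)   # 450000 300000
print(appliance - sds)  # 150000 "saved" over three years (illustrative only)
```

Real comparisons also factor in operations staff, power, and data-migration cost, which is why a purpose-built calculator beats a two-line formula.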
Check out Ross’ full presentation for more details on everything previewed here, and get your datacenter ready for digital transformation.