Data storage has come a long way since the days of disk systems. Sure, those disk systems might still be used here and there—but now they're attached to a network and software-defined so you have total control over exactly how it's formatted.
What is data storage?
Data storage is the process by which information technology archives, organizes, and shares the bits and bytes that make up the things you depend on every day—from applications to network protocols, documents to media, and address books to user preferences.
If you think about a computer (or a network, which is just a bunch of connected computers—servers included) like a brain, there’s short-term and long-term memory. In a brain, the short term memory is handled by the prefrontal cortex; in a computer, the short term memory is handled by random-access memory (RAM).
RAM is responsible for processing and remembering all the requests and actions of a computer during the time it’s awake. In the same way that you get tired after a full night of studying, RAM slows down the longer the computer is awake because it’s remembering everything that’s happened in the past as well as performing any new task that comes through in the present.
When you go to sleep, your brain converts your working memories into long-term memories in the same way that a sleeping computer clears its RAM by transferring it to a storage volume (like a hard drive, virtual storage node, or a pool of cloud storage).
A computer also distributes data to different storage volumes depending on what the data is (maybe 1 storage volume is dedicated to rich media, another is responsible for caching browser activity, and a third stores big data), in the same way that your brain distributes short-term memories depending on what the memory is (semantic, spatial, emotional, or procedural).
What is software-defined storage?
Software-defined storage (SDS) is part virtualization software and part storage management software: It abstracts the bits and bytes of data contained within hardware, formats the data into block, object, or file format, and organizes the data for network use.
SDS works particularly well with workloads based on unstructured data (like the object and block storage systems that containers and microservices rely upon), since it can scale in ways that hardwired storage solutions can’t.
It’s easier to understand SDS by comparing it to traditional appliance-based storage. Appliance storage bundles software and hardware together, but SDS decouples software from hardware and works with any industry-standard server or x86 virtualized resource. This eliminates reliance on specific hardware vendors and gives enterprises a far more accommodating purchase process, where hardware is only purchased when more capacity is needed.
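The decoupling described above can be sketched in a few lines. This is a toy model, not a real SDS implementation: the `StoragePool` class and its device names are purely illustrative, standing in for software that pools capacity from any commodity hardware and carves volumes from it without caring which vendor is underneath.

```python
class StoragePool:
    """Toy model of SDS: software pools capacity from any hardware."""

    def __init__(self):
        self.devices = {}   # device name -> free capacity in GB
        self.volumes = {}   # volume name -> (device, size in GB)

    def add_device(self, name, capacity_gb):
        # Attach any industry-standard server or disk to the pool.
        self.devices[name] = capacity_gb

    def create_volume(self, name, size_gb):
        # Carve a volume from whichever device has room.
        for dev, free in self.devices.items():
            if free >= size_gb:
                self.devices[dev] = free - size_gb
                self.volumes[name] = (dev, size_gb)
                return dev
        raise RuntimeError("pool exhausted: add another device")


pool = StoragePool()
pool.add_device("vendor-a-server", 100)
pool.create_volume("app-data", 60)
# Capacity running low? Buy hardware from any vendor and just add it:
pool.add_device("vendor-b-server", 200)
pool.create_volume("media", 150)
```

The point of the sketch is the shape of the interface: applications talk to the pool, never to a specific vendor's box.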
What is cloud storage?
When a physical resource like storage is virtualized and orchestrated by management and automation software, it becomes cloud storage. There are nuances to this description (the resource has to be made available on-demand through self-service portals supported by automatic scaling and dynamic resource allocation) but virtualization, management, and automation are the 3 foundational elements of any cloud resource—storage included.
Cloud storage is useful because it’s not always easy to estimate how much storage your enterprise needs, and buying massive amounts of capacity upfront is wasteful.
When storage is turned into a cloud resource, you can add or remove drives, repurpose hardware, and respond to changes without manually provisioning separate storage servers for every new initiative.
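The elasticity described above boils down to growing and shrinking a footprint on demand instead of buying capacity upfront. The sketch below is illustrative only; `ElasticStorage` and its methods are invented for the example and do not correspond to any real cloud API.

```python
class ElasticStorage:
    """Toy model of on-demand cloud capacity (not a real cloud API)."""

    def __init__(self):
        self.allocated_gb = 0

    def request(self, gb):
        # Self-service: grow the footprint only when a project needs it.
        self.allocated_gb += gb

    def release(self, gb):
        # Shrink when an initiative winds down; no stranded hardware.
        self.allocated_gb = max(0, self.allocated_gb - gb)


cloud = ElasticStorage()
cloud.request(50)    # new analytics project
cloud.request(20)    # short-lived test environment
cloud.release(20)    # test environment retired
```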
If your systems are designed using software-defined storage, you don't have to spend time rewriting and porting applications to support a specific cloud's storage services.
What is network-attached storage?
Network-attached storage (NAS) is a storage architecture that makes data more accessible across a network. A stripped-down operating system is installed on a box of hardware that’s no more complex than an ordinary server—hard drives, processors, random-access memory, and all.
This box (known as a NAS box, NAS server, NAS head, or NAS unit) becomes responsible for the entire network’s data storage, organization, and sharing functions.
Facilitated by transfer protocols that allow data to be shared among devices, NAS processes the entire network’s storage requests, giving an enterprise better performance, accessibility, and fault tolerance in 1 easy-to-install solution.
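The request/response pattern above can be modeled in miniature. Real NAS units speak protocols like NFS or SMB; this stand-in just shows the shape of one unit servicing every client's read and write requests against a single shared namespace. The `NASBox` class and its request format are invented for illustration.

```python
class NASBox:
    """Toy model of a NAS unit: one box answers the whole network's
    storage requests over a shared protocol (NFS/SMB in real life)."""

    def __init__(self):
        self.share = {}   # path -> file contents

    def handle(self, request):
        # Service a protocol-style request from any client on the network.
        op, path = request["op"], request["path"]
        if op == "write":
            self.share[path] = request["data"]
            return {"ok": True}
        if op == "read":
            return {"ok": path in self.share, "data": self.share.get(path)}
        return {"ok": False}


nas = NASBox()
nas.handle({"op": "write", "path": "/docs/report.txt", "data": b"Q3 numbers"})
reply = nas.handle({"op": "read", "path": "/docs/report.txt"})
```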
What is object storage?
An object is a piece of data paired with any associated metadata that provides context about the bytes contained within the object (things like how old or big the data is). Those 2 things—the data and metadata together—make an object.
The data stored in objects is uncompressed and unencrypted, and the objects themselves are arranged in object stores (a central repository filled with many other objects) or containers (a package that contains all of the files an application needs to run).
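The object described above, a payload plus the metadata that gives it context, sitting in a flat store keyed by ID, can be sketched as plain data structures. The field names here are illustrative, not any particular object storage API.

```python
import time

def make_object(data: bytes) -> dict:
    """Bundle a payload with its metadata, as object storage does."""
    return {
        "data": data,
        "metadata": {
            "size": len(data),             # how big the data is
            "created": time.time(),        # how old it is
            "content_type": "text/plain",  # context for applications
        },
    }

# A flat namespace: object ID -> object. No directories, no hierarchy.
object_store = {}
object_store["report-2024"] = make_object(b"quarterly numbers")
obj = object_store["report-2024"]
```

Because lookup is by a single key in a flat namespace, finding an object never requires walking a directory tree, which is part of why object stores scale so well.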
Objects, object stores, and containers are very flat in nature—compared to the hierarchical structure of file storage systems—which allows them to be accessed very quickly at huge scale.
Object storage and containers go hand-in-hand: Containers migrate from bare-metal environments to virtual machines and private clouds to public clouds far too often for most storage systems to keep up.
Traditional storage is difficult to port and file storage becomes onerous to navigate at the petabyte level, but objects contain just enough information for an application to find them quickly and are flexible enough to store unstructured data like images and text files.
What is file storage?
File storage is the dominant technology used on direct- and networked-attached storage systems. It takes care of 2 things: organizing data and representing it to us.
With file storage, data is arranged on the server side in the exact same format we clients see it. This allows us to request a file by some unique identifier—like a name, location, or URL—which is communicated to the storage system using specific data transfer protocols. The result is a type of hierarchical file structure we can navigate from top to bottom.
File storage is layered on top of block storage, allowing us to see and access data as files and folders, but restricting access to the blocks that stand those files and folders up.
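The top-to-bottom hierarchy described above can be sketched as a tree walked by path. The tree contents are made up for the example; the point is that clients navigate names and folders while the blocks underneath stay hidden.

```python
# A toy directory tree: folders are dicts, files are their contents.
tree = {
    "home": {
        "alice": {
            "notes.txt": "remember the milk",
        },
    },
}

def resolve(root, path):
    """Walk a /-separated path down the tree, like a file lookup."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

contents = resolve(tree, "/home/alice/notes.txt")
```

Every lookup walks the hierarchy one level at a time, which is exactly the navigation cost that makes file storage onerous at very large scale.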
What is block storage?
Block storage splits a single storage volume (like a virtual or cloud storage node, or a good old fashioned hard disk) into individual instances known as blocks.
Each block exists independently of another and can be formatted with its own data transfer protocol and operating system—giving you complete configuration autonomy.
Because block storage systems aren’t burdened with the same investigative file-finding duties as the file storage systems that rely on blocks, block storage is a faster storage system. Pair that speed with their configuration flexibility and it makes them ideal for raw server storage or rich media databases.
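The block interface described in this section is simple enough to sketch directly: a volume is just a run of bytes split into fixed-size blocks addressed by number, with no file names or hierarchy in sight. The sizes here are toy values; real systems typically use blocks of 4 KiB or more.

```python
BLOCK_SIZE = 4            # bytes per block (toy value; real: ~4 KiB)
volume = bytearray(32)    # one small "disk" of 8 blocks

def write_block(n, data):
    """Write one fixed-size block at block number n."""
    assert len(data) == BLOCK_SIZE
    volume[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE] = data

def read_block(n):
    """Read the block at block number n."""
    return bytes(volume[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

write_block(0, b"head")
write_block(3, b"tail")
```

File systems are built on top of exactly this interface: they keep the map from file names to block numbers, which is the "investigative" work block storage itself never has to do.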
How do I learn to use storage?
The way you learn to do anything else: practice. Deploying a new storage system is a lot smoother with training, and we have a ton of ways to make sure you’re ready. If you think you’re blessed with an innate knowledge of storage systems—or just want to see if you know enough to be dangerous—take this little storage quiz to assess your skill level. If you need some training, take a few courses from our cloud computing, virtualization, and storage curriculum, complete the whole thing, or take the ones required for you to get a Red Hat Certificate of Expertise in Hybrid Cloud Storage.
Software-defined storage is inherently open. It decouples hardware from software, freeing you from vendor lock-in. Red Hat has taken “open” a step further. Our software-defined storage is also open source. It draws on the innovations of a community of developers, partners, and customers. This gives you control over exactly how your storage is formatted and used—based on your business’ unique workloads, environments, and needs.
All the pieces you need to set up enterprise storage
A software-defined file storage platform to handle high-capacity tasks like backup and archival as well as high-performance tasks like analytics and virtualization. It works particularly well with containers and media streaming.
A software-defined object storage platform that also provides interfaces for block and file storage. It supports cloud infrastructure, media repositories, backup and restore systems, and data lakes. It works particularly well with Red Hat OpenStack® Platform.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks of OpenStack Foundation in the United States and other countries, and are used with the OpenStack Foundation’s permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the OpenStack community.