Red Hat Contributes Apache Hadoop Plug-in to the Gluster Community
October 28, 2013
The Red Hat Storage Team
We are excited to announce the contribution of our Apache™ Hadoop® plug-in to the Gluster Community, the open software-defined storage community. Now, Gluster users can deploy the Apache Hadoop Plug-in from the Gluster Community and run MapReduce jobs on GlusterFS volumes, easily making the data available to other toolkits and programs. Conversely, data stored on general-purpose filesystems is now available to Apache Hadoop operations without the need for brute-force copying of data to the Hadoop Distributed File System (HDFS).
The Apache Hadoop Plug-in provides a new storage option for enterprise Hadoop deployments and delivers enterprise storage features while maintaining 100 percent Hadoop FileSystem API compatibility. It also delivers significant disaster recovery benefits, industry-leading data availability, and name node high availability, with the ability to store data in POSIX-compliant, general-purpose filesystems.
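Because the plug-in implements the Hadoop FileSystem API, switching a cluster from HDFS to a GlusterFS volume is a matter of Hadoop configuration rather than code changes. A minimal core-site.xml sketch is shown below; the property names, class name, hostname, and mount point are illustrative assumptions and should be verified against the plug-in's documentation on the Gluster Forge:

```xml
<!-- core-site.xml sketch: point Hadoop at a GlusterFS volume instead of HDFS.
     Property names and values are illustrative; consult the plug-in docs. -->
<configuration>
  <property>
    <!-- Filesystem implementation class registered for the glusterfs:// scheme (assumed name) -->
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <property>
    <!-- Make the GlusterFS volume the cluster's default filesystem -->
    <name>fs.default.name</name>
    <value>glusterfs://gluster-server:9000</value>
  </property>
  <property>
    <!-- Local FUSE mount point of the GlusterFS volume (assumed property) -->
    <name>fs.glusterfs.mount</name>
    <value>/mnt/glusterfs</value>
  </property>
</configuration>
```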
The advantages of the Hadoop Plug-in in the Gluster Community include:
- supporting data access through several different mechanisms/protocols: file access with NFS or SMB, object access with Swift, and access via the Hadoop FileSystem API;
- eliminating the centralized metadata (name node) server;
- compatibility with MapReduce and Hadoop-based applications;
- eliminating code rewrites; and
- providing a fault tolerant file system.
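In practice, the compatibility points above mean existing MapReduce jobs and Hadoop shell commands run against glusterfs:// paths exactly as they would against hdfs:// paths. A hedged sketch, assuming a volume already wired in via core-site.xml (the hostname, paths, and example jar name are illustrative, not from the plug-in's documentation):

```shell
# List files on the GlusterFS volume through the standard Hadoop filesystem shell
hadoop fs -ls glusterfs://gluster-server:9000/user/alice/input

# Run the stock WordCount example unchanged against GlusterFS-backed paths
hadoop jar hadoop-examples.jar wordcount \
    glusterfs://gluster-server:9000/user/alice/input \
    glusterfs://gluster-server:9000/user/alice/output
```

No application changes are needed: only the filesystem URI differs from an HDFS deployment.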
To download the Apache Hadoop Plug-in, users can go to https://forge.gluster.org/hadoop/. For the Apache Hadoop Ambari Project, users can visit the Apache Hadoop Community at http://hadoop.apache.org/.