
The Multi-Cloud Object Gateway is a new data federation service introduced in OpenShift Container Storage 4.2. The technology is based on the NooBaa project, which Red Hat acquired in November 2018 and recently open sourced. More information is available on the NooBaa project website.

The Multi-Cloud Object Gateway provides an object interface with an S3-compatible API. The service is deployed automatically as part of OpenShift Container Storage 4.2 and behaves the same regardless of its hosting environment.

Simplicity: A Single Experience Anywhere

In its default deployment, the Multi-Cloud Object Gateway provides a local object service. Every data bucket is backed, by default, by local storage, or by cloud-native storage when hosted in the cloud; no additional configuration is required.
The Multi-Cloud Object Gateway's object service API is always an S3 API, which means a single experience on premises and in the cloud, for any cloud provider. Moving to, or adding, a new cloud vendor involves no learning curve, which translates into greater agility for your teams.


The administrator can add multiple backing stores and apply mirroring policies to create hybrid and multi-cloud data buckets, using cloud-native and/or on-premises storage providers. Each bucket has its own data placement policy, which can be changed over time to support the evolving needs of applications and environments.
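As a sketch of what such a placement policy looks like, the following BucketClass resource mirrors every object across two backing stores. The backing store names here are made up for illustration; the structure follows the NooBaa operator's BucketClass custom resource:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: mirror-to-cloud          # illustrative name
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - placement: Mirror          # write each object to every listed store
      backingStores:
      - on-prem-store            # e.g. a local, PV-backed store (assumed name)
      - aws-store                # e.g. an AWS S3 backing store (assumed name)
```

Object Bucket Claims that reference a storage class built on this bucket class get mirrored, hybrid placement without any change to the application.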

Integrated Monitoring and Management

The Multi-Cloud Object Gateway leverages the power of Kubernetes Operators to automate complex workflows such as deployment, bootstrapping, configuration, provisioning, scaling, upgrading, monitoring, and resource management. It is integrated into the OpenShift storage dashboard to provide an instant view of current object usage, alerts, and resource allocation.

If object services are impacted, the Multi-Cloud Object Gateway Operator will actively perform healing and recovery as needed to ensure data is resilient and available to users. There is no need for the administrator to enable healing operations, set up jobs to rebalance or redistribute the data, or even upgrade the storage services. For administrators concerned about automatic upgrades, the OpenShift Container Storage Operator can also be configured for manual upgrades to meet organizational maintenance policies.

Object Provisioning Made Easy

OpenShift Container Storage supports persistent volume claims for block- and file-based storage. In addition, it introduces Object Bucket Claims (OBCs) and Object Buckets (OBs), a generic, dynamic bucket provisioning API that takes its inspiration from Persistent Volume Claims (PVCs) and Persistent Volumes (PVs). Users familiar with the PVC/PV model can handle bucket provisioning with the same pattern.

Applications that require an object bucket create an Object Bucket Claim (OBC) that references the object storage class by name.

You can use oc to create the Object Bucket Claim:

$ oc create -f obc-test.yaml
objectbucketclaim.objectbucket.io/obc-test created

Use oc to confirm that the Object Bucket and the accompanying Object Bucket Claim were created:

$ oc get objectbucket
NAME                CLAIM-NAME   RECLAIM-POLICY   PHASE   AGE
obc-test-obc-test   obc-test     Delete           Bound   80s

After creating the Object Bucket Claim, the following Kubernetes resources are created:

  • An Object Bucket, which contains the bucket endpoint information, a reference to the Object Bucket Claim, and a reference to the storage class.
  • A ConfigMap in the same namespace as the Object Bucket Claim, which contains connection information such as the endpoint host, port, and bucket name, used by applications to consume the object service.
  • A Secret in the same namespace as the OBC, which contains the access key and secret key needed to access the bucket.

This information can be injected into a pod as environment variables. The following YAML is an example of a Job that uses an Object Bucket Claim and reads the information from the ConfigMap and Secret into environment variables:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: "obc-test"
spec:
  generateBucketName: "obc-test-noobaa"
  storageClassName: openshift-storage.noobaa.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: obc-test
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - image:
        name: obc-test
        env:
        - name: BUCKET_NAME
          valueFrom:
            configMapKeyRef:
              name: obc-test
              key: BUCKET_NAME
        - name: BUCKET_HOST
          valueFrom:
            configMapKeyRef:
              name: obc-test
              key: BUCKET_HOST
        - name: BUCKET_PORT
          valueFrom:
            configMapKeyRef:
              name: obc-test
              key: BUCKET_PORT
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: obc-test
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: obc-test
              key: AWS_SECRET_ACCESS_KEY
        - name: BUCKET_REGION
          value: "us-east-1"
        volumeMounts:
        - name: training-persistent-storage
          mountPath: /data
      volumes:
      - name: training-persistent-storage
        emptyDir: {}
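Inside the container, the application simply reads these variables and points its S3 client at the gateway. The following sketch shows one way to assemble the pieces; the environment variable names are the ones the OBC provisioner writes to the ConfigMap and Secret, while the helper function names and the port-to-scheme rule are illustrative simplifications:

```python
import os

def s3_endpoint_from_env(env=os.environ):
    """Build the S3 endpoint URL from the variables the Job injects
    from the OBC's ConfigMap (BUCKET_HOST / BUCKET_PORT keys)."""
    host = env["BUCKET_HOST"]
    port = env.get("BUCKET_PORT", "443")
    # Simplification for the sketch: treat 443 as TLS, anything else as plain HTTP.
    scheme = "https" if port == "443" else "http"
    return f"{scheme}://{host}:{port}"

def bucket_credentials_from_env(env=os.environ):
    """Credentials come from the OBC's Secret
    (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY keys)."""
    return env["AWS_ACCESS_KEY_ID"], env["AWS_SECRET_ACCESS_KEY"]
```

The resulting endpoint URL and key pair can then be handed to any S3 client library (for example, as the endpoint URL and credentials of a boto3 client), which is exactly why the same application code runs unchanged on premises or in any cloud.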

Security First

The Multi-Cloud Object Gateway provides multiple solutions for security concerns out of the box.

    1. Data encryption by default - every write operation is split into multiple chunks, and each chunk is encrypted with a new key.
    2. Key management separated from data - all keys are managed in a centralized location, separate from the encrypted chunks of data, regardless of where the data lives: in the cloud, on premises, or a mixture for hybrid and multi-cloud deployments.
    3. Data isolation - by default, every Object Bucket Claim creates a new account whose credentials are permitted to access only its single new bucket, and to create new buckets accessible only to that account.
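The first two points, per-chunk keys and key/data separation, can be sketched in a few lines. This is purely illustrative: the chunk size, the stand-in SHA-256 keystream cipher, and the function names are inventions for the demo, not NooBaa's actual implementation; the point is that keys and ciphertext travel separately:

```python
import hashlib
import secrets

CHUNK = 4  # tiny chunk size for the demo; real systems use far larger chunks

def _keystream(key: bytes, length: int) -> bytes:
    # Stand-in stream cipher built from SHA-256 in counter mode;
    # just enough to demonstrate per-chunk keys, not production crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(data: bytes):
    """Split the data into chunks and encrypt each chunk under a fresh
    random key. Keys and ciphertext are returned separately, mirroring
    the gateway's separation of key management from data placement."""
    keys, chunks = [], []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = secrets.token_bytes(32)   # a brand-new key per chunk
        ct = bytes(a ^ b for a, b in zip(chunk, _keystream(key, len(chunk))))
        keys.append(key)                # -> centralized key store
        chunks.append(ct)               # -> any backing store, anywhere
    return keys, chunks

def decrypt(keys, chunks):
    # Reassemble by decrypting each chunk with its own key.
    return b"".join(
        bytes(a ^ b for a, b in zip(ct, _keystream(k, len(ct))))
        for k, ct in zip(keys, chunks))
```

Because each chunk is useless without its key, an attacker who compromises a single backing store, on premises or in one cloud, gains nothing without also compromising the separate key store.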

Resources and Feedback

To find out more about OpenShift Container Storage or to take a test drive, visit the OpenShift Container Storage product page.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.

About the author

Red Hatter since 2018, technology historian and founder of The Museum of Art and Digital Entertainment. Two decades of journalism mixed with technology expertise, storytelling and oodles of computing experience from inception to ewaste recycling. I have taught or had my work used in classes at USF, SFSU, AAU, UC Law Hastings and Harvard Law. 

I have worked with the EFF, Stanford, and MIT to brief the US Copyright Office and change US copyright law. We won multiple exemptions to the DMCA, accepted and implemented by the Librarian of Congress. My writings have appeared in Wired, Bloomberg, Make Magazine, SD Times, The Austin American-Statesman, The Atlanta Journal-Constitution, and many other outlets.

I have been written about by the Wall Street Journal, The Washington Post, Wired, and The Atlantic. I have been called "The Gertrude Stein of Video Games," an honor I accept, as I live less than a mile from her childhood home in Oakland, CA. I was project lead on the first successful institutional preservation and rebooting of the first massively multiplayer game, Habitat, for the C64, from 1986. I've consulted and collaborated with the NY MOMA, the Oakland Museum of California, Cisco, Semtech, Twilio, Game Developers Conference, NGNX, the Anti-Defamation League, the Library of Congress, and the Oakland Public Library System on projects, contracts, and exhibitions.

