
This is a guest post by Madhur Nawandar and Prashanto Kochavara of Trilio.

Trilio is a native data-protection solution for the Open Hybrid Cloud landscape. Over a year ago, we started fielding requests from our customers and prospects to protect their Kubernetes-based applications, both in their existing on-prem environments and in the public cloud.

When we decided to take on the cloud-native challenge, Trilio had to make an important strategic decision about how to package our data-protection solution. While Kubernetes provides primitives like StatefulSets, ReplicaSets, and DaemonSets, we needed a more programmatic way of packaging and deploying the product.

Helm was one option, heavily used in Kubernetes environments for packaging applications, with a large developer community and backing from major enterprises. However, while building our approach, Red Hat briefed us on Operators, a technology developed by CoreOS (which Red Hat acquired in 2018). We learned how Operators simplify the lifecycle management of an application and the management benefits they provide for customers.

The Operator Lifecycle Manager (OLM) framework for developing and building Operators, along with its catalog-based application management, was extremely attractive and aligned with Trilio's strategy of providing a simple, self-service approach to data protection in a Kubernetes environment. Technically, since our architecture had already decoupled the application from the Operator, adopting OLM was easy. As a result, we decided to support both deployment models and give our customers options.

As part of its Upstream Operator, Trilio offers a single Custom Resource Definition (CRD) that lets customers install the correct version of the application and update it when newer versions of the software are released. Because our modular design keeps the Operator code and the application code separate, Trilio was able to take full advantage of the OLM framework by exposing our application CRDs directly, providing a better customer experience. With this approach, upstream environments use Trilio's Upstream Operator, while in OLM-based environments like OpenShift, the OLM framework does the Operator's job and serves the following functions:
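
To give a sense of what a single-CRD install looks like, here is a minimal sketch of a custom resource that an Operator like this might reconcile. The group, kind, and field names below are illustrative placeholders, not Trilio's actual schema:

```yaml
# Hypothetical custom resource: the Operator watches objects of this
# kind and installs or upgrades the application to match the spec.
apiVersion: triliovault.example.com/v1
kind: TrilioVaultManager
metadata:
  name: triliovault
  namespace: trilio-system
spec:
  # Desired application version; changing this field triggers
  # the Operator to roll the deployment to the new release.
  applicationVersion: "2.0.0"
```

Applying an updated version of this object (for example with `kubectl apply -f`) is all a customer would need to do; the Operator handles the rollout.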

  • Managing installation
  • Managing the lifecycle of the application
  • Application availability

With this two-pronged strategy, we were able to publish the application CRDs that customers would leverage to “operate” our new cloud-native data-protection product, TrilioVault for Kubernetes. This approach also gave us the opportunity to provide our customers with an integrated experience for managing Trilio’s custom resources: a single UI where the customer manages not only applications, but also their data protection. While this strategy of exposing CRDs was more time-consuming for Trilio, it was the correct route for our customers and reflected Kubernetes design principles.

One of the best aspects of the OLM framework that customers will enjoy is how updates are delivered to the TrilioVault for Kubernetes application. Not only are Role-Based Access Control (RBAC) policies adhered to (only the cluster-admin role may manage Operators), but the user can also set the approval policy for updates to “automatic” or “manual.” This process of delivering updates to customers is completely automated and managed through the Operator certification program.
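
In OLM, that approval policy lives on the Subscription object that installs the Operator. The manifest below uses OLM's real `Subscription` API; the package and channel names are assumptions for illustration:

```yaml
# OLM Subscription: subscribes the cluster to an Operator package
# from a catalog source and controls how updates are approved.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: triliovault-operator    # assumed package name
  namespace: openshift-operators
spec:
  channel: stable               # assumed update channel
  name: triliovault-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
  # "Manual" holds each update in a pending InstallPlan until a
  # cluster-admin approves it; "Automatic" applies updates as
  # they appear in the channel.
  installPlanApproval: Manual
```

With `installPlanApproval: Manual`, new versions published to the channel surface as pending InstallPlans rather than being applied immediately.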

Overall, the tools provided by Red Hat for developing and delivering an Operator made the entire process painless. The Operator Software Development Kit (SDK) provides a CLI tool to create, build, and deploy an Operator, which saved considerable time during development.
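
A typical Operator SDK workflow looks roughly like the following sketch; the domain, group, kind, and image names are placeholders, not Trilio's actual project:

```shell
# Scaffold a new Go-based Operator project
operator-sdk init --domain example.com --repo github.com/example/sample-operator

# Generate a CRD and controller skeleton for a new API
operator-sdk create api --group apps --version v1alpha1 --kind SampleApp \
  --resource --controller

# Build and push the Operator image, then deploy it to the cluster
# (make targets come from the SDK-generated Makefile)
make docker-build docker-push IMG=quay.io/example/sample-operator:v0.1.0
make deploy IMG=quay.io/example/sample-operator:v0.1.0
```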

Running validation tests locally also helped with certification. The Operator certification process not only ensured we followed Red Hat's best practices, but also validated our code from a customer's perspective: the entire customer journey, from the OperatorHub listing to the install and deployment experience, was tested. The Red Hat team was great to work with; their attention to detail showed when they ensured the metadata and cosmetics around the Operator were styled correctly.

TrilioVault for Kubernetes has launched for Early Access with a Red Hat OpenShift Certified Operator, but our journey has just begun. Today, TrilioVault's Operator offers basic install, seamless upgrades, and lifecycle-management capabilities. Going forward, we'll focus on metrics and monitoring to provide data intelligence, and on achieving massive scale and parallelism through deep insights and autopilot capabilities, while continuing to deliver superior and innovative data-protection features.

You can get your hands on our Operator directly within the OpenShift embedded OperatorHub.

You can also watch videos and test-drive TrilioVault for Kubernetes directly from the Trilio website.

