The world of enterprise IT has seen a massive shift over the last decade as cloud computing has changed the way we work and do business. Today, microservices, application programming interfaces (APIs) and containers are the predominant approach to building, connecting and deploying applications, and Kubernetes has become the de facto standard for managing them at scale in any environment.

These technologies are core to cloud-native application development and emerged from organizations' need to keep pace with the markets around them. The digital experience, delivered through software, has become one of the leading factors in competitive differentiation for companies today. Being able to rapidly respond to dynamic market conditions, incorporate user feedback, or deploy new products and features is crucial to success.

Integration plays a key role in enabling this agility, and has continued to evolve in ways that support faster, more responsive and more efficient application architectures. As the technologies have progressed, Red Hat has continued to advance our integration portfolio in ways that we believe best help customers meet their business needs. Our work on the open source Strimzi project is a good example of this. We believe Strimzi is the best solution for running Apache Kafka on Kubernetes, and we continue to invest in strengthening the technology and community.

To that end, we are pleased to announce the latest release of Red Hat Integration, which introduces a number of new capabilities designed for today's Kubernetes-native, event-driven applications.

Red Hat Integration is a comprehensive set of integration and event processing technologies for creating, extending and deploying container-based integration services across hybrid and multicloud environments. As an agile, distributed and API-centric solution, Red Hat Integration enables organizations to connect and share data across the applications and systems that a digital business depends on.

With the latest release, customers are able to:

  • Prevent runtime errors with Service Registry. One of the attractive features of Apache Kafka, particularly for event-driven architectures, is its speed; that speed is due in part to the brokers not inspecting the format of the data they move. This can cause problems if a publisher changes its message format without telling subscribers and breaks their processing. The service registry in Red Hat Integration, based on the open source Apicurio project, acts as a central store for the data contracts (schemas) shared between publishers and subscribers. These contracts provide visibility into the types of messages flowing through the system and can be used to prevent runtime data errors (see the producer sketch after this list). The service registry applies the same governance to API definitions as well.
  • Record real-time updates to changing data with change data capture, based on Debezium. In an event-driven architecture, applications and services are designed to respond to real-time information based on changes in business state. Change data capture enables transactional systems to automatically publish those changes to the event-streaming backbone (see the change-event sketch after this list). Capturing real-time data changes not only enables greater data analysis and new use cases, but also keeps changes made directly to a database from going unnoticed by the rest of the application environment.
  • Improve availability and consistency of Apache Kafka clusters with MirrorMaker 2.0. MirrorMaker is an Apache Kafka component used to replicate streams of data between clusters within a datacenter or across multiple datacenters. Red Hat Integration now supports the latest version of MirrorMaker for Strimzi, which delivers a number of technical improvements over the previous version, such as bidirectional replication, topic configuration synchronization, and offset mapping. Offset mapping in particular makes it easier and faster for consumers to find their place in a target cluster if the source cluster has failed (see the offset-translation sketch after this list).
  • Deploy integration logic natively to Kubernetes with Apache Camel K (Technical Preview). Deploying integrations to containers has historically required large amounts of memory and complicated YAML configuration. These two factors could significantly slow down deployment and redeployment, and increase memory use, which is one of the largest impediments to container density on application nodes. Camel K addresses this by letting developers express integration logic in a concise, declarative syntax and deploying it natively, and therefore more efficiently, to Kubernetes (see the route sketch after this list). Camel K also provides a good connectivity option for serverless applications.
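
To make the contract idea concrete, here is a minimal sketch of a producer publishing through a registry-governed schema. It uses the standard Kafka producer API with Apicurio's Avro serializer; the serializer class name, the apicurio.registry.url property, and the topic and bootstrap addresses are assumptions that may differ between registry versions and deployments.

```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092"); // illustrative address
        props.put("key.serializer", StringSerializer.class.getName());
        // Assumed Apicurio serde: it looks up the Avro schema (the "contract") in the
        // service registry and serializes records against it, so malformed messages
        // are caught at the producer rather than at runtime in a consumer.
        props.put("value.serializer", "io.apicurio.registry.serde.avro.AvroKafkaSerializer");
        props.put("apicurio.registry.url", "http://service-registry:8080/apis/registry/v2");

        // A tiny Avro schema and a record that conforms to it.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}");
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "order-123");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-123", order));
        }
    }
}
```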
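
For a sense of how change data capture surfaces database changes as events, here is a minimal sketch using Debezium's embedded engine API. The connector class, database connection details and file-based offset store are illustrative placeholders, and in practice the Debezium connectors typically run inside Kafka Connect and publish straight to Kafka topics rather than being embedded like this.

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

public class ChangeCapture {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "orders-connector");
        props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        // Illustrative connection details; replace with real values.
        props.setProperty("database.hostname", "orders-db");
        props.setProperty("database.port", "5432");
        props.setProperty("database.user", "replicator");
        props.setProperty("database.password", "secret");
        props.setProperty("database.dbname", "orders");
        props.setProperty("database.server.name", "orders-server");

        // Every committed insert, update and delete arrives here as a JSON change
        // event, ready to be forwarded to the event-streaming backbone.
        DebeziumEngine<ChangeEvent<String, String>> engine =
            DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> System.out.println(event.value()))
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```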
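
The offset mapping noted above is exposed to clients through Kafka's MirrorMaker client utilities. The sketch below shows how a consumer group might translate its committed offsets from a failed source cluster (aliased here as "primary") into the corresponding offsets on the target cluster; the cluster alias, group name and bootstrap address are illustrative.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class FailoverOffsets {
    public static void main(String[] args) throws Exception {
        // Connection settings for the target (backup) cluster.
        Map<String, Object> targetProps = new HashMap<>();
        targetProps.put("bootstrap.servers", "backup-cluster-kafka-bootstrap:9092");

        // Translate the group's committed offsets from the source cluster ("primary")
        // into the equivalent offsets on the target cluster, so consumers can resume
        // close to where they left off after a failover.
        Map<TopicPartition, OffsetAndMetadata> translated =
            RemoteClusterUtils.translateOffsets(
                targetProps, "primary", "orders-consumer-group", Duration.ofSeconds(30));

        translated.forEach((partition, offset) ->
            System.out.printf("Resume %s at offset %d%n", partition, offset.offset()));
    }
}
```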
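
And to illustrate the simplified syntax, the following is a complete Camel K integration written in the Camel Java DSL; the file name and endpoints are illustrative. With the kamel CLI it could be deployed with a command along the lines of `kamel run Greeting.java`, with no project scaffolding or packaging step in between.

```java
// Greeting.java -- a self-contained integration that Camel K can build and
// deploy directly to Kubernetes.
import org.apache.camel.builder.RouteBuilder;

public class Greeting extends RouteBuilder {
    @Override
    public void configure() {
        // Fire every five seconds and log a message; in a real integration the
        // endpoints would be Kafka topics, HTTP services, databases, and so on.
        from("timer:tick?period=5000")
            .setBody().constant("Hello from Camel K")
            .to("log:greeting");
    }
}
```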

Learn about these features and more in the release notes. The latest version of Red Hat Integration is available now. Customers can get the latest updates from the Red Hat Customer Portal.