
Apache Kafka on Kubernetes for payments infrastructure

The need for more efficient payments infrastructure

Across the payments industry, organizations feel pressure to reduce costs. Evolving messaging standards, the growing need to support real-time processing, and accelerating transaction volumes make costs even harder to contain, and existing technology is not helping. Consequently, many payment organizations are reevaluating their current payments infrastructure to reduce both the complexity and cost of processing payments. They also want to gain the scalability and performance required to adapt to an increasingly digital world.

Excess infrastructure and cost

Virtualized infrastructure has made provisioning compute, storage, and network resources much easier. However, uncertainty over transaction volumes often drives teams to configure virtualized environments for projected peak volumes. While erring on the side of overprovisioning eliminates the risk of running out of capacity during an unexpected spike in demand, it leads to unused capacity and high costs at nominal usage. In many instances, payment processors also run their messaging infrastructure in a hot/warm configuration because of challenges with distributed data replication. As a result, a significant portion of resources may sit idle during parts of the year, resulting in higher operating costs than actual payment volumes warrant.

Complex maintenance and upgrades

Many organizations have deployed traditional message brokers on top of virtualized infrastructure. They have also invested in some degree of automation to ease the setup and configuration of the messaging infrastructure, which typically includes a database for persisting messages and a cache to improve overall messaging throughput. Unfortunately, the associated automation scripts still need to be adjusted and tested with every change, which makes the messaging infrastructure costly to operate and turns regular upgrades into a painful process.

Cannot quickly adjust to changing volumes 

Deploying even a small configuration based on one of the more popular message-oriented middleware packages makes it possible to achieve performance of approximately 20,000 messages per second. The performance interplay between the database, the broker, and the caching components makes vertical scaling the typical choice. However, this approach also increases the cost and effort of configuring additional virtualized infrastructure, which ultimately makes it difficult to adjust quickly as transaction volumes fluctuate.

Difficult to maintain service levels with typical maintenance

The challenges with traditional virtualized messaging infrastructure go beyond unused capacity and higher operating costs. Service availability for traditional brokers can also suffer when the time comes to perform maintenance, because upgrades and patches typically require downtime. Taking the messaging infrastructure offline is a growing problem for payment processors seeking to capitalize on today's 24x7 access to local clearing houses to clear and settle payments in real time. The need for uptime becomes even more pronounced as customers grow accustomed to these faster payment schemes.

Economically scale to meet real-time processing demands

Advancements in cloud technology, Kubernetes container platforms, and integration from Red Hat can dramatically improve infrastructure use and lower the operating cost of the messaging infrastructure that supports your payment organization. The elasticity of the cloud means additional capacity is automatically made available based on resource consumption or messaging volume, and excess capacity is automatically removed. This inherent scalability eliminates the need for overprovisioning.

Compute, storage, and network advances from Intel also make it possible to process more transactions per hour without having to invest in new hardware. By tapping into the innovation of open source communities coupled with high-performance hardware, Red Hat and Intel can help you deploy a messaging infrastructure that provides the throughput needed for real-time processing, reducing the cost to run your payments infrastructure.

Red Hat, a company trusted to support the most critical services of the world's leading firms,2 provides all the components (Figure 1) needed to adopt Apache Kafka on Kubernetes. These capabilities are engineered in leading open source communities, such as Apache Kafka, Strimzi, Kubernetes, Ceph®, Istio, Apache Camel, and Prometheus. Intel complements these communities with advanced compute and storage infrastructure so that you can achieve twice the performance of previous generations of processors.3 Red Hat has been working with Intel for 25 years to ensure that our joint solutions meet the most rigorous performance and stability requirements of financial institutions.

Figure 1: Apache Kafka on Kubernetes with Red Hat and Intel

Simple and fast streaming for payments

As one of the original developers of Apache Kafka, LinkedIn has used the software to scale its messaging infrastructure and achieve a processing capacity of 7 trillion events per day in real time.4 Uber also uses Kafka to achieve processing of over one trillion events per day in real time.5 Payment organizations benefit from the ongoing performance enhancements contributed to the Apache Kafka community from these users and others.6 As a result, you can tap into the collective innovation across the globe that is making simple and fast streaming infrastructure.

Apache Kafka was created with the separation of message sources and consumers in mind. This decoupling makes it easy to adopt a publish-and-subscribe model, enabling you to simplify the integration between systems in a fault-tolerant manner. It also supports long-term retention and immediate access to data so that you can quickly meet audit requirements. In addition, events are replayable—a useful capability when you need to process messages again or perform forensics to identify a service or security issue. Message partitioning further allows data to be organized for maximum concurrent access.
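The ordering and concurrency properties of partitioning come from routing each message key to a fixed partition. The sketch below illustrates the idea in Python; it is not Kafka's actual partitioner (which uses a murmur2 hash), and the MD5-based hash and account-key names are illustrative assumptions only.

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition.

    Illustrative only: Kafka's default partitioner hashes keys with
    murmur2, while this sketch substitutes MD5. The property that
    matters is stability -- the same key always lands on the same
    partition, preserving per-key ordering while separate partitions
    are consumed concurrently.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for one payment account map to one partition (ordered),
# while different accounts spread across partitions for parallelism.
p1 = partition_for(b"account-1001", 6)
p2 = partition_for(b"account-1001", 6)
assert p1 == p2  # stable routing for the same key
```

Because a partition is consumed by only one consumer in a group at a time, this key-to-partition mapping is what lets a payment processor scale consumers horizontally without losing per-account ordering.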


High-performance storage is essential to achieving these results because Apache Kafka persists messages to disk. Apache Kafka with Intel Optane can write data to disk 6 times faster than with traditional storage,1 letting you break the boundaries of existing storage technology. Unlocking this type of performance requires all parts of the stack to work together. Red Hat and Intel jointly optimize the software and hardware components so that you can get the performance needed for real-time processing.
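Kafka's disk writes are fast partly because its storage model is an append-only log: records are written sequentially and read back by offset. The following is a minimal sketch of that idea, not Kafka's actual on-disk format; the length-prefixed record layout and file names are assumptions for illustration.

```python
import os
import tempfile

class AppendOnlyLog:
    """Minimal sketch of Kafka-style log storage (illustrative, not
    Kafka's actual segment format): records are appended sequentially
    to a file and addressed by a logical offset, so replaying events
    for audits or reprocessing is a cheap sequential read."""

    def __init__(self, path: str):
        self.path = path
        self._index = []                      # logical offset -> byte position
        self._file = open(path, "ab")

    def append(self, payload: bytes) -> int:
        self._index.append(self._file.tell())
        # Length-prefixed record: 4-byte big-endian size, then payload.
        self._file.write(len(payload).to_bytes(4, "big") + payload)
        self._file.flush()
        return len(self._index) - 1           # logical offset of this record

    def read(self, offset: int) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(self._index[offset])
            size = int.from_bytes(f.read(4), "big")
            return f.read(size)

# Usage: append two payment events, then replay the first by offset.
path = os.path.join(tempfile.mkdtemp(), "payments-0.log")
log = AppendOnlyLog(path)
log.append(b'{"txn": 1, "amount": 120}')
log.append(b'{"txn": 2, "amount": 75}')
assert log.read(0) == b'{"txn": 1, "amount": 120}'
```

Sequential appends like these are the access pattern that fast storage such as Intel Optane accelerates, which is why the broker benefits so directly from the underlying media.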

Apache Kafka is included within the Red Hat® AMQ component of Red Hat Integration and comes with built-in operators for Red Hat OpenShift®.7 These operators automate the creation, configuration, and management of Kafka instances, fully automating the initial setup and upgrade of your messaging infrastructure. This built-in automation reduces operational cost and the burden on support staff, making your organization more efficient.
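With an operator, a Kafka cluster is declared as a Kubernetes custom resource and the operator reconciles brokers, storage, and listeners to match it. A sketch of such a resource, in the style used by the Strimzi community project, might look like the following; the cluster name, replica counts, and storage sizes are illustrative assumptions, not recommendations.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: payments-cluster          # illustrative name
spec:
  kafka:
    replicas: 3                   # broker count; sized per workload
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true                 # encrypt traffic inside the cluster
    storage:
      type: persistent-claim
      size: 100Gi                 # illustrative sizing
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:
    topicOperator: {}             # manage topics declaratively
    userOperator: {}              # manage users and credentials
```

Applying a change to this resource (for example, raising `replicas`) is all that is needed; the operator performs the rolling update, which is how setup and upgrades become routine rather than painful.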

You also have access to Quarkus, a reactive framework tailored for event-driven producers and consumers. This runtime is optimized for GraalVM and OpenJDK and is based on popular libraries and standards within the Java™ community. Compared with a traditional cloud-native stack, Quarkus cuts memory consumption dramatically (a traditional stack consumes 277% more memory) while improving throughput by 38.4%, with remarkably fast startup times.7

A cloud platform made for high-volume payments

Red Hat initiated the Kubernetes resource management working group to address the performance-sensitive needs of its customers. With Red Hat, you have access to the leaders in the community who are tackling the most difficult performance challenges in the software industry. The result is a cloud platform that uses compute, storage, and network resources efficiently so that you can get the most out of your investment.

Achieving real-time processing at a lower cost with Intel Xeon processors means that you can tap into future improvements in processor design. With support for higher memory speeds, enhanced memory capacity, and built-in scalability, these processors deliver significant improvements in performance, reliability, and hardware-enhanced security, and they are optimized for demanding cloud computing, network, and storage workloads. Meanwhile, Intel Ethernet products meet the challenges of real-time payment analytics and massive data volumes, accelerating high-priority applications, packet processing, and latency-sensitive workloads.

Performance is not the only requirement. Intel AES New Instructions (Intel AES-NI) is an instruction set that accelerates the Advanced Encryption Standard (AES) algorithm on the Intel Xeon and Intel Core processor families. This on-chip technology enables fast and secure data encryption and decryption, and it is ideally suited to minimizing the compute resources required to encrypt and decrypt sensitive payment information without increasing latency or throttling throughput.

Intel Software Guard Extensions (SGX) lets you take advantage of new hardware-based controls for cloud infrastructure. It offers hardware-based memory encryption that isolates specific application code and data in memory. SGX also allows user-level code to allocate private regions of memory, called enclaves, which are protected from processes running at higher privilege levels. 

Intel Optane is an innovative technology that delivers persistent memory, large memory pools, fast caching, and fast storage. Unlike storage built on NAND, this fast, nonvolatile memory architecture allows memory cells to be individually addressed in a dense, transistor-less, stackable design, increasing overall performance even in the most dynamic real-time processing environments. Red Hat OpenShift Container Storage is built-in software-defined storage for containers. It provides complete, persistent storage and data portability across any cloud infrastructure. Importantly, it unlocks the performance benefits of Intel Optane within Red Hat OpenShift.


Traditional compute, storage, and messaging platforms cannot easily keep pace with the demands for efficient real-time processing in an increasingly digital world. Red Hat and Intel provide the capabilities needed to adopt cloud-native messaging as part of the modernization of your payments infrastructure so that you can become more effective and adaptive.

With Red Hat, you have access to award-winning training, support, and a growing ecosystem of partners to help you modernize your payments infrastructure. Red Hat's long-standing partnership with Intel means you can also maximize performance and optimize cost so that you can thrive in the payments industry. Learn more about how Red Hat and Intel can help you successfully modernize your payments infrastructure.

1. “Top 10 reasons to deploy Intel Optane technology in the data center.” Intel. Accessed 21 May 2020.

2. Red Hat client data and Fortune 500 list for 2019.

3. “High-performance, scalable data center products.” Intel. Accessed 21 May 2020.

4. Lee, Jon, and Wesley Wu. “How LinkedIn customizes Apache Kafka for 7 trillion messages per day.” LinkedIn Engineering.

5. Bansal, Ankur, and Mingmin Chen. “How Uber scaled its real time infrastructure to trillion events per day.” LinkedIn SlideShare, 14 Jun. 2017.

6. “Powered by.” Apache Kafka. Accessed 21 May 2020.

7. O’Hara, John. “Quarkus runtime performance.” Quarkus, 7 July 2019.