Working with continuous streams of data is an essential part of modern enterprise architecture. Today, with the proliferation of Internet of Things (IoT) devices and streaming platforms such as Spotify, YouTube, and fintech reporting applications, the number of applications that produce and consume streamed messages has grown by orders of magnitude.
Streaming messages continuously from a central source is an approach to architectural design that is fundamentally different from synchronous websites and client-server apps that submit and receive data one request at a time. Message-driven applications can respond to data in an ongoing manner.
You can think of message streams as analogous to a radio signal. A radio station is always broadcasting, and it's up to the listener to tune in. There is no back and forth; the signal travels one way, out of the broadcaster.
Message streams work the same way. A streaming server is either receiving messages from producers or emitting messages to consumers, and each of those flows is one way. An application that's interested in the messages broadcast from a streaming server must tune in to get the data being emitted.
Processing messages emitted from a stream has been part of application design going back to the days of the mainframe. Back then, working with message streams was a specialized skill. Architects needed to accommodate a certain amount of complexity just to emit a simple message for downstream consumption. Some architects had the skill; many didn't.
Fortunately, things have changed. Today, producing and consuming messages are commonplace, and a number of labor-saving tools and technologies have emerged. One such technology is Apache Kafka.
Messaging with Apache Kafka
Apache Kafka is a distributed, open source messaging technology that can accept, record, and publish messages at very large scale, in excess of a million messages per second. It is fast, reliable, and designed to operate at web scale.
Apache Kafka ships with command-line interface (CLI) tools that enable developers to publish and consume messages in a terminal window. However, while the CLI is useful for development and experimentation, typing at a terminal doesn't scale to meet the needs of the enterprise architect. No human being can process millions of messages at a time; this type of work needs to be automated. This is where language-specific clients come into play.
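To give a sense of what working with the CLI looks like, here is a minimal producer-and-consumer session using the console tools that ship in a Kafka distribution's bin/ directory. The broker address and topic name (localhost:9092, greetings) are illustrative assumptions, and a running Kafka broker is required.

```shell
# Terminal 1: start a console producer attached to the topic "greetings".
# Each line typed at the prompt is published as one message.
bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic greetings

# Terminal 2: consume the same topic, replaying it from the beginning.
# Messages typed into the producer appear here as they are emitted.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic greetings \
  --from-beginning
```

This back-to-back setup is handy for verifying that a broker is reachable and a topic is flowing, but it illustrates the scaling limit described above: a person at a keyboard is both the producer and the consumer.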
Clients are available for languages including Java, Node.js, C#, and Go. These clients do all the heavy lifting to connect an application to an Apache Kafka server and to publish and consume messages safely and in an orderly manner. Once the connection is established, the developer just needs to create the logic to emit or consume messages to and from Apache Kafka for the use case at hand.
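As a sketch of how little code a client leaves to the developer, here is a minimal producer using the official Kafka Java client (the org.apache.kafka:kafka-clients library). The broker address and topic name (localhost:9092, greetings) are illustrative assumptions, and the example needs a running broker to connect to.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GreetingProducer {
    public static void main(String[] args) {
        // Connection and serialization settings; the client handles the
        // rest of the wire protocol, batching, and retries internally.
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes the producer, flushing pending messages.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("greetings", "key-1", "Hello, Kafka!"));
        }
    }
}
```

Everything below the send() call, connection management, partitioning, batching, and delivery retries, is the heavy lifting the client does on the developer's behalf.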
It's still not the type of programming done by a novice developer, but the labor involved is nowhere near what experienced developers would face if they started from scratch. Using a language-specific client can significantly accelerate the implementation of a Kafka-based, message-driven architecture, on the order of weeks instead of months.
If you want to learn more about the details of Kafka in general, and particularly about programming with Kafka using the Java client, take a look at A developer's guide to using Kafka with Java, Part 1 on the Red Hat Developer blog.