Editor's note: this is part three of Mike Piech's Open Outlook: Middleware series.
In the first part of this series, we looked at the role of containers as a fundamental enabler of fine-grained microservices architectures, which in turn enable rapid, incremental, trial-and-error innovation. In the second part, we described in some detail the continuing importance of "middleware"--whether it's called middleware or something else--for developing enterprise applications in containerized, cloud-native environments. We arrived at the notion that traditional middleware must not only be substantially reimagined and refactored to support cloud-native applications optimally, but can also become substantially more powerful when it is "engineered together" in a way that creates a unified, coherent application environment. Let's unpack this a bit and understand the opportunities, benefits, and requirements.
Clear menu choice vs. researching and downloading
One of the clearest benefits of building in a cloud environment (we'll keep the idea more general than specifically cloud-native for the moment) is that the developer typically works at least part of the time in a web-based user interface where pre-built components that can be included or called (i.e., middleware) are presented in menus, drop-down lists, and the like. The developer can therefore dive into a development session or initiative with only a fairly vague idea of what's needed and discover pre-built components as she goes along, sometimes not even knowing up front that a given type of component already exists. Even when using a command-line interface (CLI) in a cloud environment, there are often commands for listing available components, so this approach is not strictly limited to web-based graphical interaction.
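To make that concrete, the sketch below shows one hypothetical way to query such a catalog programmatically. It is a minimal illustration rather than anything from the original series: it assumes an OpenShift cluster with the Operator Lifecycle Manager installed and the Python kubernetes client configured with the developer's existing kubeconfig, and it lists the same Operator catalog entries that the web console's developer catalog menus are built from.

```python
# Minimal sketch: list the pre-built components (Operator catalog entries)
# available in a cluster. Assumes an OpenShift cluster with the Operator
# Lifecycle Manager and the Python "kubernetes" client package installed.
from kubernetes import client, config

config.load_kube_config()            # reuse the developer's existing CLI credentials
catalog = client.CustomObjectsApi()

# PackageManifest objects describe what the console's catalog menus display.
packages = catalog.list_namespaced_custom_object(
    group="packages.operators.coreos.com",
    version="v1",
    namespace="openshift-marketplace",
    plural="packagemanifests",
)

for item in packages.get("items", []):
    name = item["metadata"]["name"]
    source = item.get("status", {}).get("catalogSource", "unknown")
    print(f"{name}  (from {source})")
```

The CLI equivalent would be something like `oc get packagemanifests -n openshift-marketplace`, which makes the point above: discovering what's available becomes a quick query against the environment rather than a research project.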
This menu-driven, incremental-discovery style of development is in marked contrast to traditional development, where a developer had to know in advance what middleware was available, choose what he needed, download it, install it, configure it, and wire it together with other middleware and/or his own code. Because there was so much overhead in acquiring and setting up each pre-built middleware component, the middleware tended to come in bigger chunks, dragging along more functionality than was typically needed. That was a worthwhile trade-off: it was better to set up a few larger pieces than to wire together many smaller ones.
The ease with which fine-grained components can be discovered and incorporated into a development project in a cloud environment, along with the ease with which new components can be made available in that environment, clearly makes for a much more rapidly evolving environment, one in which innovations can flourish at blinding speed.
The risks of explosive proliferation
The downside, however, is that even though the cloud machinery hides, standardizes, and automates much of the middleware setup, the explosive proliferation of the components and services in a given environment rapidly outstrips most organizations' ability to test and debug every possible combination that might be incorporated into a given application. While the cloud machinery might have been set up to automate the configuration of each component on its own, the developer is often still left figuring out how to get them working together.
This is why a cloud platform that has a superficially impressive laundry list of services in its catalog might not live up to its promise. The ease with which components can be linked into an application via the web dashboard or CLI can be deceptive. Underlying incompatibilities in the particular chosen set might not come to the surface until long after the initial selection and then become a nightmare to debug.
Harness proliferation with great engineering
Which brings us to the notion of "engineered together." If the middleware built for a particular cloud platform is organized around a common set of standards and practices by which individual services and components are developed (ideally in open source communities, so as to take maximum advantage of the innovation happening there), unit-tested, integration-tested, and productized, a substantial portion of this risk can be mitigated. By designing and testing individual components as part of a hierarchical structure of combinations, many of the most common patterns and use cases can be covered effectively.
This takes a lot of effort in both design and productization practice, but it is what it takes to provide a truly unified and cohesive environment, one that delivers the DevOps productivity and agility benefits of the cloud in an enterprise-viable platform.
These three related notes look in more detail at the runtimes, integration, and process automation areas within Red Hat Middleware. In a future note, we’ll look in more depth at a few case studies of how customers benefited from the engineered-together combination of Red Hat Middleware technologies on OpenShift.
About the author
Imaginative but reality-grounded product exec with a passion for surfacing the relevant essence of complex technology. Strong technical understanding complemented by ability to explain, excite, and lead. Driven toward challenge and the unknown.