This post was originally written by Markus Eisele in his blog: http://blog.eisele.net/2015/07/scaling-and-load-balancing-wildfly-with-fabric8-openshift.html
The first two parts of this little series showed you how to build a tiny JAX-RS service with WildFly Swarm and package it into a Docker image. You learned how to deploy this example to OpenShift, and now it is time to scale it up a bit.
Why Scaling Is Important
One of the key aspects of microservices-based architectures is decomposition into small, highly performant individual services that scale on demand and with little technical effort. Applications are now being built to scale, with the infrastructure transparently assisting where necessary.
Java EE developers have done this a lot in the past, either with standard horizontal scaling by putting more physical boxes next to each other, or with limited vertical scaling by spinning up more instances on the same host.
Microservices allow for different scaling approaches. A much more complete definition of the different variations of scaling can be found in the book The Art of Scalability.
I'm going to dig into the different approaches in future blog posts. To make this introduction to scaling a little easier, today we're going to scale our tiny little app vertically, in the sense above, by spinning up more pods for it.
What Is A Pod?
A pod (as in a pod of whales or pea pod) is a Kubernetes object which corresponds to a colocated group of applications running with a shared context.
In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. In a pre-container world, they would have executed on the same physical or virtual host. So that's what we want to scale in this example: the pod that is already running.
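To make this a bit more tangible, here is a minimal sketch of a bare pod definition in Kubernetes YAML. The image name, label, and port are assumptions based on the swarm-sample example from this series; in practice you rarely write bare pods yourself, because the replication controller (more on that below) creates them for you:
<code>apiVersion: v1
kind: Pod
metadata:
  name: swarm-sample
  labels:
    app: swarm-sample            # assumed label, used by services/RCs to find the pod
spec:
  # A pod can hold several colocated containers sharing volumes and network;
  # our example only needs one.
  containers:
  - name: swarm-sample
    image: myfear/swarm-sample   # assumed image name from this series
    ports:
    - containerPort: 8080        # assumed port of the JAX-RS endpoint
</code>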
What Did We Do So Far?
When you first deployed the JAX-RS example, OpenShift created a bunch of resources. Namely:
- ImageStream: An image stream is similar to a Docker image repository in that it contains one or more Docker images identified by tags. OpenShift stores complete metadata about each image (e.g., command, entrypoint, environment variables). Images in OpenShift are immutable. OpenShift components such as builds and deployments can watch an image stream and receive notifications when new images are added, reacting by performing a build or a deployment, for example.
- Service: A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them.
- DeploymentConfig: Building on replication controllers, OpenShift adds expanded support for the software development and deployment lifecycle with the concept of deployments. OpenShift deployments also provide the ability to transition from an existing deployment of an image to a new one, and to define hooks to be run before or after creating the replication controller.
So, a service proxies our requests to the pods, and a DeploymentConfig is built on top of the Kubernetes replication controller, which controls the number of pods. We're getting closer!
Scale My Microservice Now, Please!
Just wait a second. While services provide routing and load balancing for pods, which may blink in and out of existence, ReplicationControllers (RC) are used to specify and enforce the number of pods (replicas) that should be in existence.
RCs can be thought of as living at the same level as services, but they provide different functionality above pods. RCs are a Kubernetes object. OpenShift provides a "wrapper" object on top of the RC called a Deployment Configuration (DC). DCs not only include the RC but also allow you to define how transitions between images occur, as well as post-deploy hooks and other deployment actions.
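To show where the replica count actually lives, here is a sketch of the relevant part of an RC definition; the labels and image name are assumptions, not the exact output OpenShift generates:
<code>apiVersion: v1
kind: ReplicationController
metadata:
  name: swarm-sample-1
spec:
  replicas: 1                # the number of pods the RC keeps running
  selector:
    app: swarm-sample        # assumed label; the RC manages all pods matching it
  template:                  # the pod template used to create new replicas
    metadata:
      labels:
        app: swarm-sample
    spec:
      containers:
      - name: swarm-sample
        image: myfear/swarm-sample   # assumed image name
</code>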
We finally know where to look. Let's see what the DeploymentConfig that was created when we started our swarm-sample image looks like.
<code>oc get dc swarm-sample
NAME           TRIGGERS                    LATEST VERSION
swarm-sample   ConfigChange, ImageChange   1
</code>
Even though RCs control the scaling of the pods, they are wrapped in a higher construct, the DeploymentConfig, which also manages when, where, and how these pods/RCs will be deployed. We can still see the underlying RC (note: output truncated):
<code>oc get rc swarm-sample-1
CONTROLLER       CONTAINER(S)   IMAGE(S)                                                REPLICAS
swarm-sample-1   swarm-sample   172.30.101.151:5000/myfear/swarm-sample@sha256:[...]   1
</code>
And now we need to know whether whatever scaling we do actually works. I pushed a little curl script which outputs the result from the JAX-RS endpoint and sleeps for two seconds before requesting it again.
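The script boils down to a loop like the following sketch; the endpoint URL is a placeholder and needs to be replaced with the route of your own swarm-sample service:
<code>#!/bin/sh
# Poll the JAX-RS endpoint every two seconds; the response contains the
# HOSTNAME environment variable of the pod that answered.
ENDPOINT=http://swarm-sample.example.com/rs/customer   # placeholder URL
while true; do
  curl -s "$ENDPOINT"
  echo ""
  sleep 2
done
</code>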
Start it up and watch the result returning the same hostname environment variable over and over, until you execute the following command:
<code>oc scale dc swarm-sample --replicas=3</code>
Now everything changes, and after a while you see three different hostnames being returned. It might take a while, depending on your machine and how quickly OpenShift can spin up the new pods. You can also see the change in the admin console, where three pods are now displayed.
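You can verify the same thing on the command line; listing the pods should now show three swarm-sample pods in the Running state:
<code>oc get pods
</code>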
We can revert the behavior by setting the replica count back to 1.
<code>oc scale dc swarm-sample --replicas=1</code>
That was easy. But it's not exactly considered best practice: because all of the pods share the same context, they should never run on the same physical machine. Instead, it would be better to run a complete microservice (frontend, backend, database) on three pods within the same RC.
But this is a topic for more blog posts to come. For now, you have learned how to scale pods on OpenShift, and we can continue to evolve our example application and look at more scaling examples later.
Author
Markus Eisele
Developer Advocate
JBoss Middleware at Red Hat
@myfear
http://blog.eisele.net/
About the author
Markus Eisele is a Developer Tools Marketing Lead at Red Hat. He is also a Java Champion, a former Java EE Expert Group member, the founder of German JavaLand, and a speaker at Java conferences around the world.