Yesterday, the Kubernetes Product Security team released information about two significant bugs in Kubernetes, assigned CVE-2017-1002101 and CVE-2017-1002102. OpenShift is built upon Kubernetes, and as such these bugs were also present in both OpenShift Online and OpenShift Dedicated. Red Hat, along with Google and other members of the Cloud Native Computing Foundation, worked to create and coordinate the release of security fixes for the affected products.
In response to these security errata, once the embargo was lifted the OpenShift SRE team worked around the clock, across three geographic regions (NASA, APAC, and EMEA), to remediate the bugs on all affected clusters.
Remediation began at approximately noon Eastern (16:00 UTC) on Monday, March 12th, starting with internal and test clusters before any updates were made in production. Our usual update tooling had been modified ahead of time, in response to a prior incident post-mortem, so that it could handle the unusual nature of the patch. Instead of applying the fix as a system erratum, for most clusters the individual OpenShift components were upgraded in place and restarted. For a small number of starter tier clusters, an automated product upgrade was performed to remediate and upgrade simultaneously.
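To make the in-place approach concrete, here is a minimal Ansible sketch of what such a play could look like. The host group, package, and service names are hypothetical stand-ins, not Red Hat's actual internal playbooks:

```yaml
# Hypothetical remediation play: upgrade an affected component in place
# and restart its service, rolling through the fleet in small batches.
# All names (hosts group, package, service) are illustrative only.
- name: Patch OpenShift node components in place
  hosts: nodes
  serial: "10%"        # limit how many nodes are touched at once
  become: true
  tasks:
    - name: Upgrade only the affected package
      yum:
        name: atomic-openshift-node
        state: latest

    - name: Restart the service so it picks up the patched binaries
      service:
        name: atomic-openshift-node
        state: restarted
```

Because only the affected components are restarted, nodes stay up and running workloads do not need to be drained or rebooted.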
All externally exposed production clusters were remediated by 12:30 Eastern (16:30 UTC) on Tuesday, March 13th. Because the SRE team spans the globe, all OpenShift Dedicated clusters were patched during each customer's preferred maintenance window (typically overnight for that region).
A small number of nodes saw isolated outages as other issues came to light, but in the vast majority of cases, no reboots or node outages were required.
As is always the case, the focus was on patching public clusters (the starter and pro tiers in OpenShift Online) ahead of non-public clusters (all OpenShift Dedicated clusters), due to their increased attack surface.
The remediation process was entirely automated, including raising and lowering customer notification banners. This ensured that even though the remediation was performed on an accelerated timeline, customers were always kept informed of its status and progress. Additionally, removing cluster nodes from our maintenance systems and re-adding them afterward was automated, avoiding false alerts during the process and making it easier for team members to collaborate.
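As a sketch of that wrap-around automation, assuming a REST status-page API and a monitoring-silence CLI (every endpoint, path, and role name below is a hypothetical placeholder):

```yaml
# Hypothetical orchestration: banner raised and monitoring silenced before
# the patch, then both reversed afterward. Endpoints and commands are
# placeholders, not Red Hat's actual internal systems.
- name: Remediate a cluster with customer-facing status handling
  hosts: masters
  become: true
  pre_tasks:
    - name: Raise the customer notification banner
      uri:
        url: "https://status.example.com/api/banner"   # placeholder endpoint
        method: POST
        body_format: json
        body:
          state: present
          message: "Maintenance in progress"

    - name: Silence monitoring so the restarts do not raise false alerts
      command: /usr/local/bin/monitor-ctl silence {{ inventory_hostname }}  # placeholder CLI

  roles:
    - remediate_subpath_cve    # the actual patch logic (hypothetical role)

  post_tasks:
    - name: Unsilence monitoring
      command: /usr/local/bin/monitor-ctl unsilence {{ inventory_hostname }}  # placeholder CLI

    - name: Lower the customer notification banner
      uri:
        url: "https://status.example.com/api/banner"
        method: DELETE
        status_code: [200, 204]
```

Putting the banner and maintenance steps in pre_tasks and post_tasks means they run on every invocation, so no one can forget them under time pressure.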
Several key factors enabled us to respond this quickly:
- Our remediation automation and our installer automation are written with the same tool, Ansible. This allows significant re-use of code and sharing of expertise within the team, and scales seamlessly with the environment (see the sketch after this list).
- Our automation tools are written ahead of time with enough flexibility to handle both routine and non-routine remediation requirements.
- Collaboration tools, including screen sharing and video conferencing, allow SRE team members across the globe to work simultaneously and hand off between regions to “follow the sun”.
- Rigorous and detailed post-mortems are held after every remediation effort, allowing us to mature and enhance our automated tooling. Unexpected events that force us back to manual processing will always occur, but we rarely encounter the same one twice.
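To illustrate the first point above, a shared role can be consumed by both the installer and a targeted remediation playbook. This is a sketch under assumed names (the openshift_node role and its upgrade task file are hypothetical), not the layout of any actual Red Hat repository:

```yaml
# install.yml -- the full installer applies the shared role end to end.
# Role and file names are hypothetical.
- hosts: nodes
  roles:
    - openshift_node

# remediate.yml -- remediation re-uses the same role, but imports only
# the task file that upgrades and restarts the component.
- hosts: nodes
  tasks:
    - import_role:
        name: openshift_node
        tasks_from: upgrade   # runs roles/openshift_node/tasks/upgrade.yml
```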
For more information, please refer to the published advisories for CVE-2017-1002101 and CVE-2017-1002102.
Red Hat OpenShift SRE Team