Being a cluster administrator comes with its own challenges, especially in environments that rely on out-of-tree (OOT) kernel modules. Upgrading device plug-ins or kernel versions node by node can be error-prone. This is where the Kernel Module Management Operator (KMM) comes in, allowing admins to build, sign, and deploy out-of-tree kernel modules across multiple kernel versions.

KMM is designed to accommodate multiple kernel versions at once for any kernel module. The operator can also be used to leverage the hardware acceleration capabilities of the Intel Data Center GPU Flex Series, allowing for seamless node upgrades, faster application processing, and quicker module deployment.

Setting up KMM

KMM requires a working OpenShift environment and a registry to push images to. KMM can be installed using OperatorHub in the OpenShift console or via the following kmm.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-kmm
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: kernel-module-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Then apply it with:

oc apply -f kmm.yaml

Enabling hardware acceleration

Once installed, KMM can compile and install kernel module drivers for your hardware. Admins can then integrate it with the Node Feature Discovery (NFD) Operator, which detects hardware features on each node and labels the node accordingly for later use in selectors. NFD automatically adds labels to nodes that present certain characteristics, including whether the node has a GPU and which GPU it is.
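
For example, a node with an Intel GPU might carry NFD labels along these lines (the keys and values shown here are illustrative assumptions; what you actually see depends on your hardware and NFD configuration):

# Hypothetical NFD-generated node labels
labels:
  feature.node.kubernetes.io/pci-0300_8086.present: "true"   # a PCI display-class (0300) device from vendor 8086 (Intel) was detected
  feature.node.kubernetes.io/kernel-version.full: "5.14.0-284.25.1.el9_2.x86_64"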

Using NFD labels, you can target specific kernel versions for module deployment and enablement, so that only hosts with the required kernel and the required hardware activate the driver. This ensures that only compatible drivers are installed on nodes running a supported kernel, which is what makes KMM so valuable. A rough sketch of such a configuration is shown below.
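
As a sketch of how this targeting can look, a KMM Module resource selects nodes by an NFD label and maps kernel versions to driver container images. The module name, image, and label below are illustrative assumptions, not a drop-in configuration:

apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: intel-dgpu            # hypothetical module name
  namespace: openshift-kmm
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: i915      # kernel module to load; adjust for your driver
      kernelMappings:
        # KMM substitutes ${KERNEL_FULL_VERSION} with the node's kernel version
        - regexp: '^.+$'
          containerImage: quay.io/example/intel-dgpu-driver:${KERNEL_FULL_VERSION}   # hypothetical image
  selector:
    # only nodes that NFD labeled as having an Intel display-class PCI device
    feature.node.kubernetes.io/pci-0300_8086.present: "true"

For each kernel mapping, KMM resolves a matching driver container and loads the module only on nodes that satisfy the selector, so other nodes are left untouched.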

With NFD integration, KMM can deploy Intel GPU kernel modules to exactly the intended nodes, while leaving all other nodes unaffected. This process is detailed further on Developers.redhat.com.

Final thoughts

This is just one way KMM and kernel modules can be used to reduce the effort required to manage updates across multiple nodes. KMM lets you handle out-of-tree kernel modules seamlessly until you can incorporate your drivers upstream and include them in your distribution.

KMM is a community project, which you can test on upstream Kubernetes. There is also a Slack community channel where you can chat with fellow developers and experts about more ways to apply KMM to your own environment.

