Self-service clusters with guardrails. The CaaS operator provides an easy way to define clusters as templates and lets non-privileged developers or DevOps engineers create instances of those templates.

Features

  • Deploys fully provisioned clusters: Gives your users a cluster they can immediately start being productive with, rather than an empty cluster.
  • GitOps-driven: All configurations are applied using ArgoCD.
  • Integrated with ACM: If the CaaS is on an ACM hub cluster, all the ACM management features will work seamlessly with the cluster installed using CaaS.
  • Adds additional guardrails: On top of classic k8s quotas, CaaS adds additional cost and count-based quotas, as well as the lifetime of your clusters.
  • Parameterizable: You can decide to let some of the aspects of the template be parameterizable.
  • Requires minimal permissions: No oc get pods or oc get secrets cluster-wide. Users only need oc create ClusterTemplateInstance, oc get ClusterTemplate, oc get ClusterTemplateQuota, and oc get secret – all scoped to a single namespace. Nothing else.
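For illustration, the count, cost, and lifetime guardrails are expressed on a ClusterTemplateQuota. This is a sketch only: treat the budget, count, and deleteAfter fields as assumptions to verify against your operator version.

```yaml
# Illustrative sketch: budget, count, and deleteAfter are assumptions
# and may be named differently in your operator version.
apiVersion: clustertemplate.openshift.io/v1alpha1
kind: ClusterTemplateQuota
metadata:
  name: quota
  namespace: clusters
spec:
  budget: 20                 # total "cost" allowed in this namespace
  allowedTemplates:
  - name: hypershift-cluster
    count: 2                 # at most two live instances of this template
    deleteAfter: 24h         # instances are removed after one day
```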

How it works

(Diagram: Cluster as a Service using cluster templates)

How to install

Prerequisites

  • Kubernetes cluster to run against.
  • HyperShift or Hive operator for cluster installation.

The easiest option is to use an OCP cluster with Multicluster Engine (MCE) installed on it. This way, you will get all the dependencies already prepared and configured.

Installation

On an OCP cluster

Install the Cluster as a service operator from the OperatorHub:

  1. Go to the OCP console.
  2. Pick local-cluster in the top left corner.
  3. In the menu, select Operators -> OperatorHub.
  4. Search for Cluster as a service operator.
  5. Select it and hit the install button.

Note that ArgoCD is installed as a dependency of the operator.

Non-OCP cluster

Please follow the instructions from the OperatorHub page.

ArgoCD configuration

As noted above, ArgoCD is installed as a dependency of the cluster as a service operator, but it is not automatically configured for use with the operator.

Please follow the ArgoCD configuration guide to set up ArgoCD.

How to use

The cluster as a service operator ships with a few ready-to-use templates.

HyperShift cluster without workers (not for production)

This template is not meant for production use. You cannot run any workloads on it or scale it up. Use this template only to experiment with CaaS and understand its concepts.

Prerequisites

HyperShift enabled and configured on your cluster:

  • If you have OCP + Multicluster Engine (MCE) installed on your cluster, follow these steps.
  • If you don't use OCP + Multicluster Engine (MCE), follow these steps.

Steps

1. Create a namespace clusters to store your clusters in:

kind: Namespace
apiVersion: v1
metadata:
  name: clusters
  labels:
    argocd.argoproj.io/managed-by: argocd

2. Create two secrets, one which contains the pull-secret and another one for the SSH public key:

kind: Secret
apiVersion: v1
metadata:
  name: pullsecret-cluster
  namespace: clusters
stringData:
  .dockerconfigjson: '<your_pull_secret>'
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Secret
metadata:
  name: sshkey-cluster
  namespace: clusters
stringData:
  id_rsa.pub: <your_public_ssh_key>
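Equivalently, the same secrets can be created from local files with oc. The file names pull-secret.json and ~/.ssh/id_rsa.pub below are assumptions; substitute your own paths.

```shell
# Create the pull secret from a local file (path is an example).
oc create secret generic pullsecret-cluster -n clusters \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=pull-secret.json

# Create the SSH public key secret (path is an example).
oc create secret generic sshkey-cluster -n clusters \
  --from-file=id_rsa.pub="$HOME/.ssh/id_rsa.pub"
```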

3. Allow instances of the template to be created in this namespace:

apiVersion: clustertemplate.openshift.io/v1alpha1
kind: ClusterTemplateQuota
metadata:
  name: quota
  namespace: clusters
spec:
  allowedTemplates:
  - name: hypershift-cluster


4. Create an instance of the HyperShift template by applying the following YAML:

apiVersion: clustertemplate.openshift.io/v1alpha1
kind: ClusterTemplateInstance
metadata:
  name: hsclsempty
  namespace: clusters
spec:
  clusterTemplateRef: hypershift-cluster

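If the template exposes parameters, an instance can set them. The following is a sketch only: the shape of the parameters field and the ocpVersion parameter name are illustrative, not taken from a real template.

```yaml
# Sketch only: the parameters field and the "ocpVersion" parameter
# name are illustrative assumptions.
apiVersion: clustertemplate.openshift.io/v1alpha1
kind: ClusterTemplateInstance
metadata:
  name: hscls-parameterized
  namespace: clusters
spec:
  clusterTemplateRef: hypershift-cluster
  parameters:
  - name: ocpVersion
    value: "4.14"
```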

Check the cluster

Wait for the cluster to be ready. Note that the cluster will never actually import into ACM because it does not have any workers.

  • oc get ClusterTemplateInstance hsclsempty -n clusters
  • When the status.phase is Ready, you can log into the cluster.
  • The credentials are exposed as secrets and referenced from the status (kubeconfig, adminPassword, apiServerURL).
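The steps above can be sketched as shell commands. The status field path .status.kubeconfig.name follows the description above (the status references the credential secrets), but verify it against your operator version.

```shell
# Wait for the instance to report Ready (cluster provisioning takes a while).
oc wait clustertemplateinstance/hsclsempty -n clusters \
  --for=jsonpath='{.status.phase}'=Ready --timeout=60m

# The status references the kubeconfig secret by name.
KUBECONFIG_SECRET=$(oc get clustertemplateinstance hsclsempty -n clusters \
  -o jsonpath='{.status.kubeconfig.name}')

# Decode the kubeconfig and use it to reach the new cluster.
oc get secret "$KUBECONFIG_SECRET" -n clusters \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > hsclsempty.kubeconfig
oc --kubeconfig hsclsempty.kubeconfig get clusterversion
```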

HyperShift cluster with KubeVirt virtual machine workers

This template deploys an entirely self-contained cluster: both the control plane and the workers run on the hub cluster. The control plane uses HyperShift, and the workers run as virtual machines using KubeVirt.

Prerequisites

You need HyperShift enabled and configured on your cluster:

  • If you have OCP + Multicluster Engine (MCE) installed on your cluster, follow these steps.
  • If you don't use OCP + Multicluster Engine (MCE), follow these steps.

KubeVirt enabled and configured on your cluster:

  • If you have OCP installed on your cluster, install OpenShift Virtualization from OperatorHub in your cluster.
  • If you don't use OCP, follow the instructions in the “install” part of operatorhub.io.

Steps

1. Create a namespace clusters to store your clusters in:

kind: Namespace
apiVersion: v1
metadata:
  name: clusters
  labels:
    argocd.argoproj.io/managed-by: argocd


2. Create two secrets, one which contains the pull-secret and another one for the SSH public key:

kind: Secret
apiVersion: v1
metadata:
  name: pullsecret-cluster
  namespace: clusters
stringData:
  .dockerconfigjson: '<your_pull_secret>'
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Secret
metadata:
  name: sshkey-cluster
  namespace: clusters
stringData:
  id_rsa.pub: <your_public_ssh_key>


3. Allow instances of the template to be created in this namespace:

apiVersion: clustertemplate.openshift.io/v1alpha1
kind: ClusterTemplateQuota
metadata:
  name: quota
  namespace: clusters
spec:
  allowedTemplates:
  - name: hypershift-kubevirt-cluster


4. Create an instance of the HyperShift template by applying the following YAML:

apiVersion: clustertemplate.openshift.io/v1alpha1
kind: ClusterTemplateInstance
metadata:
  name: hsclskubevirt
  namespace: clusters
spec:
  clusterTemplateRef: hypershift-kubevirt-cluster


Check the cluster

Wait for the cluster to be ready.

  • oc get ClusterTemplateInstance hsclskubevirt -n clusters
  • When the status.phase is Ready, you can log into the cluster.
  • The credentials are exposed as secrets and referenced from the status (kubeconfig, adminPassword, apiServerURL).
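Because the workers run as KubeVirt virtual machines on the hub cluster, you can also watch them come up. The exact namespace holding the VMs depends on the template, so the sketch below lists them across all namespaces.

```shell
# Worker nodes appear as VirtualMachineInstances on the hub cluster.
oc get vmi -A
```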

Documentation

Learn about CRDs

Permissions and env setup

License

Copyright 2022.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


About the author

Tomas Jelinek is a Software Engineer at Red Hat with over seven years of experience with RHEL High Availability clusters.

