The three previous posts in this series focused on getting your OpenShift cluster deployed and prepared to host, scale, and manage applications. This post focuses on the tasks relevant to your users. Things like RBAC, delegating permissions, and making sure applications get the best possible experience when deployed to OpenShift are all critically important. Let’s get started by looking at how to prepare for users to connect, use, and consume resources.
Preparing for Users
Now that the cluster is deployed and the core services and capabilities have been configured, you should evaluate the features available to developers and applications and configure them according to your needs.
- Configure user authentication / identity provider(s) (see the sketch after this list).
- Configure user authorization / RBAC (also sketched after this list).
- After configuring administrator accounts, remove the kubeadmin user.
- Add (or block) additional registries.
- Browse OperatorHub and deploy and configure any Operators you need.
- Customize the default project template.
- Add default quota(s) and LimitRange(s) to prevent overconsumption by any one project (sketched after this list).
- Configure default node selectors for the cluster or for individual projects (sketched after this list).
- Decide if you want to deploy the Samples Operator.
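To make the first couple of items more concrete, here is a minimal sketch of an HTPasswd identity provider. The secret name (htpasswd-secret) and users file are assumptions for illustration; other supported providers (LDAP, OIDC, and so on) follow the same pattern of updating the cluster OAuth resource.

```yaml
# Minimal HTPasswd identity provider sketch, assuming a secret named
# "htpasswd-secret" was already created in openshift-config from an
# htpasswd file, for example:
#   oc create secret generic htpasswd-secret \
#     --from-file=htpasswd=users.htpasswd -n openshift-config
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local-users          # display name shown on the login page
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret  # secret containing the htpasswd file
```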
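For authorization, a common first step is delegating cluster administration to a group of real users so the temporary kubeadmin account can be retired. The group name and user names below are hypothetical; substitute whatever your identity provider supplies.

```yaml
# Sketch of delegating cluster administration to a (hypothetical) group.
# The equivalent binding can also be created with:
#   oc adm policy add-cluster-role-to-group cluster-admin platform-admins
# Once real administrators exist, kubeadmin can be removed with:
#   oc delete secrets kubeadmin -n kube-system
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: platform-admins
users:
- alice
- bob
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-admins-cluster-admin
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: platform-admins
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: cluster-admin
```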
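For quotas and limit ranges, something along these lines is a reasonable starting point. The names and values are illustrative only and should be tuned with your application teams; adding objects like these to your customized default project template ensures every new project receives them automatically.

```yaml
# Illustrative defaults only; object names and values are assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: project-limits
spec:
  limits:
  - type: Container
    default:            # applied as the limit when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied as the request when a container sets none
      cpu: 100m
      memory: 256Mi
```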
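And for a cluster-wide default node selector, the Scheduler resource can be edited as sketched below. The env=general label is a made-up example and must actually exist on the intended nodes; individual projects can override it with the openshift.io/node-selector annotation.

```yaml
# Cluster-wide default node selector sketch; "env=general" is hypothetical.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: env=general
```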
At this point, you have configured and prepared the OpenShift deployment for workloads. If you are moving applications from OpenShift 3 to OpenShift 4, the documentation provides guidance on using the Cluster Application Migration tool to simplify the process and ease the transition between old and new.
With everything configured and applications being deployed, it is important to work with the developer and applications teams to maximize their experience and value from OpenShift.
Making the Most of Your Deployment
Applications are the reason you are deploying OpenShift, so it is important to ensure that they are getting the resources they need and making the most of the features and capabilities available to them.
As a cluster administrator, you almost certainly have a different set of priorities and concerns than the application teams and developers. I would encourage you to work with those teams to help them use Kubernetes, and OpenShift, to their full potential. A few examples include:
- Introducing them to the developer console, including how to monitor and view metrics for their projects, and the odo developer CLI.
- Educating them on the efficient use of resources, such as:
- Understanding how node resources are allocated to pods.
- Configuring and using pod priorities and pod disruption budgets so that applications do not accidentally go offline during node drain operations, such as during an OpenShift update/upgrade (see the sketches after this list).
- Setting pod requests and limits appropriately (also sketched after this list).
- Discussing and deciding what thresholds are appropriate to use with the horizontal pod autoscaling feature (also sketched after this list).
- Working together to ensure that cluster resources and application needs are aligned, including:
- Assessing memory and risk requirements.
- Limiting the bandwidth available to pods (sketched after this list).
- Identifying critical pods that must not be evicted and marking them accordingly (sketched after this list).
- Helping them to understand builds and build strategies, along with Images and ImageStreams.
- Determining whether Pipelines will provide value to your application and development practices.
- Using Operators, whether Go, Helm, or Ansible-based, to simplify common administrative tasks and/or deploying and updating your organization’s applications.
- Performing some basic housekeeping tasks on a regular basis, including:
- Staying up to date with OpenShift releases, both micro and minor versions.
- Performing object pruning periodically.
- Reviewing and analyzing cluster metrics and alerts, then determining whether any action needs to be taken, such as adjusting node count or size and/or pod requests and limits.
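To illustrate a few of the items above, here is a sketch of container requests and limits on a Deployment. The names, image, and values are placeholders; requests drive scheduling and node allocation, limits cap consumption, and together they determine the pod’s quality-of-service class.

```yaml
# Hypothetical Deployment showing container requests and limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: image-registry.example.com/team/example-app:latest
        resources:
          requests:        # used by the scheduler to place the pod
            cpu: 100m
            memory: 256Mi
          limits:          # hard cap enforced at runtime
            cpu: 500m
            memory: 512Mi
```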
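A pod disruption budget for that same hypothetical application might look like the sketch below; on older clusters the API group is policy/v1beta1 rather than policy/v1. With two replicas and minAvailable: 1, a node drain, such as during an OpenShift upgrade, can evict one pod at a time without taking the application offline.

```yaml
# Minimal PodDisruptionBudget sketch for the example-app Deployment above.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb
spec:
  minAvailable: 1          # never voluntarily evict below one running replica
  selector:
    matchLabels:
      app: example-app
```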
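Horizontal pod autoscaling thresholds are a conversation to have with the application team; the 70% CPU target below is only a starting point, and older clusters use the autoscaling/v2beta2 API instead of autoscaling/v2.

```yaml
# Autoscaler sketch targeting the example-app Deployment; values are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU use exceeds 70%
```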
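Finally, pod bandwidth limits and priorities are both set on the workload itself. The class name, priority value, and bandwidth figures below are hypothetical; OpenShift also ships the system-cluster-critical and system-node-critical priority classes for workloads that genuinely must not be evicted.

```yaml
# Sketch combining a pod-level bandwidth limit with a custom priority class;
# all names and values are hypothetical.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical
value: 1000000
globalDefault: false
description: "Workloads that should be evicted last under node pressure."
---
apiVersion: v1
kind: Pod
metadata:
  name: example-critical-pod
  annotations:
    kubernetes.io/ingress-bandwidth: 10M   # cap inbound traffic to the pod
    kubernetes.io/egress-bandwidth: 10M    # cap outbound traffic from the pod
spec:
  priorityClassName: business-critical
  containers:
  - name: app
    image: image-registry.example.com/team/example-app:latest
```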
Ready for Success
Well, this is quite the milestone! The cluster is configured, developers and application teams are using the resources, applications are deployed and doing their job. You’re done, right? Time to get some coffee and relax!
But, after that coffee break, it is back to reality. OpenShift’s use of Operators means that many common tasks are already automated, leaving you to monitor and verify what is happening. However, that does not mean the operations team can ignore OpenShift! At a minimum, you will want to regularly review the OpenShift lifecycle and interoperability matrix and, of course, keep up with changes, enhancements, and security fixes by reviewing release notes and applying updates.
This series of posts has covered the breadth of OpenShift, from planning and deploying, to configuring core services and customizing the cluster for your needs. This final entry explored configuration relevant to managing user access and application resources, but that does not mean it is the end of the journey. Staying up to date and continuing to learn about OpenShift are important, so I would encourage you to visit our Twitch live stream during one of our scheduled sessions. We also have regularly updated content available on the OpenShift YouTube channel, including the OpenShift Commons sessions!