The cloud native approach to developing and deploying applications is increasingly being adopted in the context of edge computing. However, edge workloads and use cases span a myriad of locations, technical requirements, and physical footprints.
At Red Hat, we are seeing an emerging pattern in which some infrastructure providers and application owners want a consistent workload life cycle approach across the business. This includes their on-premises, hybrid/multicloud, and edge architectures as well as continuous operations and management of sites that might have intermittent or very low bandwidth connectivity back to a central site.
For example, think of locations such as cruise ships that spend some of their time docked and directly connected to a regional data center, while other times they are out at sea with a slow, high-latency satellite connection.
We also observed that in these remote locations, as the distance between an edge site and the central management hub grows, the physical space for systems decreases. This means these edge locations operate with very limited space, power, and cooling, creating an environment suited to a single server.
Introducing Red Hat OpenShift in a single node
Until now, the smallest supported cluster size (including control and worker nodes) was the three-node cluster. As of OpenShift 4.9, we now have a full OpenShift deployment in a single node. This fully supported topology joins the three-node cluster and remote worker topologies to offer three options to meet more customer requirements in more edge environments. Single node OpenShift offers both control and worker node capabilities in a single server and provides users with a consistent experience across the sites where OpenShift is deployed, regardless of the size of the deployment.
Because a single node offers both control and worker node functionality, users who have adopted Kubernetes at their central management sites and want independent Kubernetes clusters at edge sites can deploy this smaller OpenShift footprint with minimal to no dependence on the centralized management cluster, allowing those edge sites to run autonomously when needed.
Just like multi-node OpenShift, the single node deployment extends the OpenShift experience, taking advantage of the same skills and tools and bringing consistent upgrades and life cycle management throughout the container stack, including the operating system, feature-rich Kubernetes (including cluster services), and applications.
Keep in mind a single node OpenShift deployment differs from the default self-managed/highly available cluster profile in a couple of ways:
- To optimize system resource usage, many operators are configured to reduce the footprint of their operands when running on single node OpenShift.
- In environments that require high availability, it is recommended that the architecture be designed so that, if the hardware fails, workloads are transitioned to other sites or nodes while the impacted node is recovered.
Think cell towers, not data centers
Single node OpenShift focuses on reducing the prerequisites for OpenShift deployment on bare metal nodes, bringing the barrier to entry down to a minimum, further simplifying and streamlining the deployment experience of OpenShift. This is especially useful when deploying environments that need to occupy the absolute smallest footprint and have minimum external requirements.
One example of this use case is seen in telecommunication service providers' implementation of a Radio Access Network (RAN) as part of a 5G mobile network.
In the context of telecommunications service providers' 5G RANs, it is increasingly common to see "cloud native" implementations of the 5G Distributed Unit (DU) components. Due to latency constraints, the DU needs to be deployed very close to the Radio Units (RUs) for which it is responsible. In practice, this can mean running the DU on anything from a single server at the base of a cell tower to a more datacenter-like environment serving several RUs.
A typical DU is a resource-intensive workload: each DU requires 6 dedicated cores and 16-24 GB of RAM (consumed as huge pages), with several DUs packed onto a single server, along with multiple single root I/O virtualization (SR-IOV) NICs and FPGA or GPU acceleration cards, each carrying several Gbps of traffic.
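To make that resource profile more concrete, here is a minimal pod spec sketch for a DU-like workload. It assumes the cluster administrator has already created an SR-IOV network attachment and device resource (the names sriov-net-du and openshift.io/sriov_du are illustrative, as is the container image); it is not taken from any specific DU implementation.

```yaml
# Illustrative pod spec for a DU-like workload: dedicated cores, huge pages,
# and an SR-IOV virtual function. Names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: du-example
  annotations:
    # Attach a secondary SR-IOV network (assumes an SriovNetwork named sriov-net-du exists)
    k8s.v1.cni.cncf.io/networks: sriov-net-du
spec:
  containers:
  - name: du
    image: registry.example.com/ran/du:latest   # placeholder image
    resources:
      requests:
        cpu: "6"                      # 6 dedicated cores per DU
        memory: "16Gi"
        hugepages-1Gi: "16Gi"         # RAM consumed as huge pages
        openshift.io/sriov_du: "1"    # SR-IOV VF (name set by the SriovNetworkNodePolicy)
      limits:
        cpu: "6"
        memory: "16Gi"
        hugepages-1Gi: "16Gi"
        openshift.io/sriov_du: "1"
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages
```

Setting requests equal to limits puts the pod in the guaranteed QoS class, which is what allows the dedicated cores described above to be pinned to the workload.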
One crucial detail of this use case is ensuring that this workload can be "autonomous" so that it can continue operating with its existing configuration, even when any centralized management functionality is unavailable. This is where single node OpenShift comes in.
Seeing it in action
If you would like to get your own single node OpenShift deployment, you are more than welcome to do so. We have a blog that walks you through the installation step by step.
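As a rough illustration of what such a deployment looks like, below is a minimal install-config.yaml sketch for a single node cluster using the bootstrap-in-place flow. The base domain, network ranges, and installation disk are placeholders to adapt to your environment; the step-by-step blog mentioned above remains the authoritative guide.

```yaml
# Minimal install-config.yaml sketch for single node OpenShift.
# All environment-specific values below are illustrative.
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno-cluster
compute:
- name: worker
  replicas: 0              # no dedicated workers; the control plane node also runs workloads
controlPlane:
  name: master
  replicas: 1              # single control plane node
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.1.0/24
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda   # disk the bootstrap-in-place installer writes to
pullSecret: '<pull secret>'
sshKey: '<ssh public key>'
```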
About the authors
Moran Goldboim joined Red Hat in 2010 and is now a Senior Principal Product Manager.
Eran Cohen joined Red Hat in January 2020. He is a software developer with a passion for infrastructure and technology and has expertise in design, prototype, and execution. He is currently focused on the OpenShift Assisted Installer and Single Node OpenShift.