Recently, I hosted a Red Hat webinar titled “Kubernetes is the Platform. What’s next?” during which I highlighted the current Kubernetes architecture and capabilities, some of the new innovation happening within the broader open source community, and how much of this innovation is making its way into Red Hat OpenShift Container Platform.
There were great questions from the audience afterward but due to time constraints, I wasn’t able to get to every one. I’ve tackled the remaining questions below and provided some additional links for details or learning.
NOTE: Many questions were similar or overlapping, so many have been consolidated.
Q1: Please help me understand Containers-as-a-Service (CaaS). With Platform-as-a-Service (PaaS), I understand that vendors provide the 'platform' that users can run their apps on top of. With CaaS, however, my understanding is that providers don't literally hand over 'containers' as a service; I think providers/vendors still provide a 'platform' to run containers (instead of apps) on top of. Do you agree? Or am I missing something?
A1: The original NIST definition of cloud computing (circa 2011) identified Infrastructure-as-a-Service (IaaS), PaaS and Software-as-a-Service (SaaS). At the time, IaaS implied that the unit of application packaging and isolation was a virtual machine (VM), since that was the most commonly used technology. Since then, Linux containers have grown in use and maturity. So we could say that a platform (e.g., Red Hat OpenShift Container Platform) which provides a management framework for containers (using Kubernetes) is an IaaS. But that could confuse the marketplace, so the term CaaS is now more frequently used to specify that the expected packaging and isolation is done using containers. In addition, Red Hat OpenShift Container Platform provides a number of additional capabilities that improve developer productivity, or PaaS capabilities, so it can be considered both a CaaS and a PaaS, depending on how the platform is used by developers and operations teams.
Q2: What drives C-level stakeholders to buy?
A2: In the context of Kubernetes platforms, Red Hat customers across a variety of industries have shown that containers and Kubernetes are able to deliver positive results. Stories about organizations from many industries and geographies that have made a positive impact on their business using Red Hat OpenShift Container Platform can be found on our customer success page, or by watching recorded sessions from recent OpenShift Commons Gathering events.
Q3: Can I use our own container registry?
A3: Yes. Kubernetes does not include a container registry as part of the open source project, so external registries can be used. Red Hat OpenShift provides an integrated container registry, as well as the Red Hat Quay registry (Enterprise and SaaS offerings).
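For example, to pull images from your own registry, you would typically create a docker-registry pull secret and reference it from your pods. Here's a minimal sketch; the registry hostname and secret name are placeholders, and the secret would be created ahead of time (e.g., with `oc create secret docker-registry` or `kubectl create secret docker-registry`):

```yaml
# Pull an application image from a private/external registry.
# "registry.example.com" and "my-registry-creds" are placeholder names.
apiVersion: v1
kind: Pod
metadata:
  name: app-from-private-registry
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0
  imagePullSecrets:
    - name: my-registry-creds   # docker-registry type Secret created beforehand
```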
Q4: If a data center has x86 servers and ARM servers, can Kubernetes / Red Hat OpenShift Container Platform manage workloads across both infrastructures?
A4: Yes, Kubernetes / Red Hat OpenShift Container Platform can support both x86 and ARM servers. There may be dependencies on the version of operating system and chipset that you're running, so check the documentation to make sure you have the proper versions for compatibility.
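As an illustration, you can steer a workload to a particular architecture with a nodeSelector on the architecture node label. The exact label key depends on your Kubernetes version (older releases use beta.kubernetes.io/arch, newer ones kubernetes.io/arch), and the image name here is a placeholder:

```yaml
# Pin a pod to ARM nodes via the architecture label that is set on every node.
apiVersion: v1
kind: Pod
metadata:
  name: arm-only-workload
spec:
  nodeSelector:
    beta.kubernetes.io/arch: arm64   # may be kubernetes.io/arch on newer clusters
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # placeholder; must be built for arm64
```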
Q5: Our company is concerned with container security. Does Red Hat OpenShift Container Platform bridge the gap between registry governance and Kubernetes?
A5: This should probably be broken down into two parts:
[1] Security of the container content that gets into the registry and resides within the registry,
[2] Security of the platform where the containers run
Regarding [1], most commercial container registry offerings provide embedded image scanning (for vulnerabilities), image signing, or both. These capabilities are available with the integrated Red Hat OpenShift container registry, Red Hat Quay, and offerings from several OpenShift Commons ecosystem partners. In addition, the Red Hat Container Catalog (RHCC) provides a source for certified, up-to-date, and more secure container images.
We discussed some of these topics on recent episodes of the PodCTL podcast (Eps.14, Eps.32).
Regarding [2], Red Hat believes in defense-in-depth: proper security for containerized applications should come from several layers of security. See this whitepaper for more details.
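To make that concrete, one of those layers is the pod and container securityContext (OpenShift enforces similar restrictions by default through Security Context Constraints). A minimal sketch, with placeholder names:

```yaml
# A restrictive securityContext as one layer of defense-in-depth.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
spec:
  securityContext:
    runAsNonRoot: true              # refuse to run containers as root
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]             # drop all Linux capabilities
```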
Q6: Can you use Kubernetes to orchestrate non-containerized applications?
A6: Currently, Kubernetes only provides (supported) orchestration for containerized applications. But there is an emerging open source project, called “KubeVirt”, which adds a virtualization API to Kubernetes in order to manage virtual machines. Red Hat plans to offer this as “Container-Native Virtualization” (CNV). This was previewed at Red Hat Summit in May 2018 during the day 1 keynote.
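To give a feel for the model, KubeVirt manages virtual machines through custom resources. The sketch below is only illustrative; since the project is still emerging, the API version and field names have changed between releases and may not match what eventually ships in CNV:

```yaml
# An illustrative KubeVirt VirtualMachine custom resource
# (apiVersion and field names are approximate and version-dependent).
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo   # demo image from the KubeVirt project
```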
Q7: What higher-level frameworks in the 'developer tooling' space did you allude to?
A7: While “developer frameworks” are outside the scope of the Kubernetes project, a number of projects have emerged that look at ways to make it easier for developers to build cloud-native applications that interact with elements of Kubernetes, as well as abstract away some of the complexity of working with Kubernetes' YAML-based manifest files.
We discussed some of the emerging ways that developers get applications into Kubernetes on PodCTL #37, but here are some other emerging projects (note: most of these are in very early stages of development and may not be recommended for production uses):
• OpenShift.io
• OpenShift ODO
• Draft
• Brigade
• Metaparticle
• Pulumi
• Ballerina
Q8: What is the relationship between microservices and serverless? How does Kubernetes play into, or impact, these two concepts?
A8: Microservices is the concept of building applications in (relatively) smaller elements, typically confined to a specific business task, so that individual components can be updated independently of the broader system. It is a contrast to previously built “monolithic” applications, where all/most functionality was linked more closely together, making it more difficult to update or add new functionality. Microservices are often used in conjunction with new, cloud-native application models.
Serverless is the concept of application platforms where application developers do not need to have any awareness of the underlying infrastructure resources or the scaling of those resources. Applications in a serverless environment are defined as “functions”, or small chunks of code which perform a specific task or function. Because of this, the terms serverless and Function-as-a-Service (FaaS) are often intertwined or used interchangeably.
Kubernetes has supported patterns and frameworks used for microservice applications since v1.0.
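The canonical microservice pattern on Kubernetes is a Deployment (for replicated, independently updatable instances) fronted by a Service (for a stable, discoverable endpoint). Here's a minimal sketch with placeholder names; on older clusters the Deployment apiVersion may be apps/v1beta1 or extensions/v1beta1:

```yaml
# One microservice: a replicated Deployment plus a stable Service endpoint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/shop/orders:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Gives the replicas a single, discoverable in-cluster address.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```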
Recently, a number of open source serverless projects have been created which run on Kubernetes (e.g., Fission, Fn, Kubeless, Nuclio, OpenFaaS, OpenWhisk, Riff). We discussed aspects of serverless on Kubernetes here and here. We highlighted OpenWhisk on OpenShift at Red Hat Summit in May 2018, and announced an early developer preview of a new serverless offering based on OpenWhisk called Red Hat OpenShift Cloud Functions.
Q9: Is your concept of 'Service Brokers' similar to Kubernetes 'ExternalName' Services? And if so how do Service Brokers go beyond that type of Service?
A9: ExternalName enables Kubernetes to return the name of a resource that is external to the Kubernetes cluster. A Service Broker is based on the Open Service Broker API standard and is typically tied to the Service Catalog, which can create and manage an external service or resource through the broker. More details on how the Service Catalog interacts with a Service Broker are provided here, in a discussion with one of the SIG engineering leads.
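For comparison, an ExternalName Service is purely a DNS alias: cluster DNS returns a CNAME to the external hostname, and nothing is provisioned or managed on your behalf. A minimal sketch (the hostname is a placeholder):

```yaml
# An ExternalName Service: a DNS alias to a resource outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: billing-db
spec:
  type: ExternalName
  externalName: db.billing.example.com   # placeholder external hostname
```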
Q10: Any recommendations for CI pipeline integrations?
A10: A number of CI platforms provide native integration with Kubernetes. OpenShift provides several strategies and deployment models for integrating with CI pipelines.
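As one example of what that integration can look like, OpenShift supports pipeline builds in which a Jenkins pipeline is defined directly in a BuildConfig. This is just a sketch: the pipeline body is illustrative, and on older OpenShift 3.x releases the apiVersion may simply be v1.

```yaml
# A BuildConfig using the JenkinsPipeline strategy with an inline Jenkinsfile.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: app-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                sh 'echo building the application...'
              }
            }
          }
        }
```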
Q11: How does CoreOS complement OpenShift? Is there any redundancy in the stack?
A11: Many elements of the CoreOS technologies, acquired in January 2018, are planned to be integrated into Red Hat platforms (Red Hat OpenShift, Red Hat CoreOS, Red Hat Quay). In addition, emerging technologies such as the Operator Framework are planned to become core elements of the Red Hat OpenShift Container Platform.
More details about the integrations are provided in this blog.
Several sessions from Red Hat Summit (OpenShift Roadmap, Red Hat CoreOS Roadmap, Future of Kubernetes Platform) in May 2018 provide more details about the integrations.
Q12: Can Kubernetes exist without Docker, and where do you see this evolving?
A12: In the early versions of Kubernetes, the only supported container runtime was Docker. Since then, other container runtimes have emerged, along with standardization efforts within the Open Container Initiative (OCI). This led the Kubernetes project to create the Container Runtime Interface (CRI), which provides a common interface and abstraction for multiple container runtimes, such as CRI-O and containerd. In the future, using tools like Buildah, Podman, Skopeo and others, I anticipate it will be possible to run Kubernetes without Docker.
Q13: Would JBoss integrations run as CRDs?
A13: This question was asked in the context of stateful applications and middleware services being deployed using the Operator Framework, which interacts with Kubernetes CRDs. Currently, the plan for OpenShift is to eventually have all middleware services and applications be deployed using the Operator Framework or to interact with the Operator Framework for day-2 operations.
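For context, a CRD simply teaches the Kubernetes API server about a new resource type, which an Operator then watches and reconciles. Here is a hedged sketch with a hypothetical group and kind (this is not an actual JBoss or OpenShift middleware CRD):

```yaml
# A CustomResourceDefinition of the kind an Operator would watch.
# The group, kind, and names here are hypothetical.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: messagingservices.middleware.example.com   # must be <plural>.<group>
spec:
  group: middleware.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: messagingservices
    singular: messagingservice
    kind: MessagingService
```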
Q14: How do you do automated testing in Kubernetes?
A14: In most cases, automated testing is done in conjunction with a Continuous Integration (CI) platform and the associated plugins for testing tools (e.g., Selenium, Cucumber, SonarQube, etc.). As mentioned in Q10 (above), there are several ways to integrate CI/CD platforms with Kubernetes/OpenShift.