If you happened to miss this year’s Kubernetes Summer Camp, there’s some good news! The sessions were recorded and are available for on-demand viewing. Along with those, you’ll also get access to a variety of downloadable content, including a free O’Reilly e-book.

Here’s some of what you’ll learn.

1. A short history of Kubernetes

Linux containers took off in a big way after the introduction of Docker. As a new method for creating, shipping and running applications, Linux containers were the answer to many problems.

And while containerized applications solved a lot of problems, they introduced challenges of their own, including scaling and availability.

This is where Kubernetes comes in. Kubernetes is a container orchestrator that helps mitigate those challenges and supports multicloud and other environments. This means you can deploy the same Kubernetes environment in different public clouds, in private clouds, and even on bare metal.

Kubernetes was created by Google, based on its 15 years of experience with an internal project called Borg. Google initially looked at open sourcing Borg, but realized it would be easier to create a new project from scratch.

And so Kubernetes was born in June 2014. Red Hat announced its collaboration around Kubernetes in July 2014.

Learn more: 2021 Kubernetes Summer Camp - Kubernetes I

2. Red Hat OpenShift for rapid innovation

Red Hat OpenShift is an enterprise Kubernetes platform that provides a security-focused and consistent foundation for modern, hybrid-cloud application development and life-cycle management across physical, virtual, private and public clouds, and in edge computing.

Red Hat OpenShift includes an enterprise-grade Linux operating system, a container runtime, and networking, monitoring, registry, and authentication and authorization solutions. It lets you automate life-cycle management for increased security, tailored operations solutions, easy-to-manage cluster operations and application portability.

A number of advanced capabilities are available in Red Hat OpenShift, including:

  • Operators, which provide automated installation, upgrades and life-cycle management (see the sketch after this list).

  • Red Hat OpenShift Service Mesh, which gives you a uniform way to manage, connect and observe applications.

  • Red Hat OpenShift Serverless, which allows an application to use compute resources and automatically scale up or down based on use, triggered on demand by a number of event sources.

  • Red Hat OpenShift Pipelines, which provide a streamlined user experience through the OpenShift console developer perspective, command-line interfaces (CLIs), and integrated development environments (IDEs).

  • Red Hat OpenShift Virtualization, which brings virtual machines to OpenShift.

  • Edge computing, which includes 3-node clusters as well as remote worker nodes.

  • Databases and data analytics, which provide methods for ingesting, storing, processing and analyzing datasets from a variety of sources.

  • AI/ML on OpenShift, which accelerates the rollout of intelligent applications across the hybrid cloud.
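
To make the Operator model concrete, here’s a minimal sketch of how a cluster administrator might ask Operator Lifecycle Manager (OLM) to install the OpenShift Serverless Operator. The channel and namespace values follow commonly documented defaults, but treat them as illustrative rather than definitive:

```yaml
# Subscription resource asking Operator Lifecycle Manager (OLM) to install
# the OpenShift Serverless Operator and keep it updated from the stable channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless   # target namespace (illustrative)
spec:
  channel: stable                   # update channel to track
  name: serverless-operator         # package name in the catalog
  source: redhat-operators          # catalog source providing the package
  sourceNamespace: openshift-marketplace
```

Once the Subscription is applied, OLM handles installing and upgrading the Operator automatically, which is exactly the life-cycle management the first bullet describes.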

Learn more: Opening session, including an OpenShift developer tour

3. An introduction to Knative Serverless

The Knative serverless environment lets you deploy code to a Kubernetes platform, such as Red Hat OpenShift. With Knative, you create a service by packaging your code as a container image and handing it to the system. Your code only runs when it needs to, with Knative starting and stopping instances automatically. This helps reduce operations costs, since you can pay for cloud-based compute time only when it’s needed, instead of managing your own servers.

What is serverless computing?

The first thing to realize is that “serverless” doesn’t mean there isn’t a server. It means you don’t have to worry about the server(s) doing the work, only about the work itself.

Serverless computing involves building and running applications that do not require server management. Instead, applications are bundled as one or more functions, uploaded to a platform, and then automatically executed and scaled as needed.

Developers only have to deploy applications, and never have to worry about where they’re run, how they’re run, how the network is handled, or any of that other stuff.

Kubernetes already does a pretty good job at this, especially if you’re using OpenShift, but serverless takes it even further—saving effort and resources by scaling things up automatically if there’s a lot of traffic, or scaling down to zero if there’s no traffic at all.

Serverless evolved from the microservices paradigm. According to the talk, microservices are usually long-lived processes that run until you stop the node or undeploy the workload, and most of the time they use a request-response model such as HTTP.

In contrast, serverless functions or workloads are controlled by the platform: your Kubernetes cluster takes care of deploying and running them only when they’re needed, and these processes can be very short-lived. This has brought about multiple new programming models, such as event-driven asynchronous processing.

What is Knative?

Knative builds on this: it is a Kubernetes-based platform for deploying and managing modern serverless workloads, whether on premises, in the cloud, or in a third-party data center.

Knative further eliminates the tasks related to provisioning and managing servers, allowing developers to focus solely on application development.

Why use Knative?

The “Kubernetes-based” part is important because it means that Knative is Kubernetes native (thus the name). While other options, such as Google Cloud Functions, Azure Functions and AWS Lambda, are great, they only work if you deploy on that vendor’s cloud, meaning there is vendor lock-in.

With Knative, it doesn’t matter what cloud provider(s) you use, whether it’s on-premises, or some hybrid cloud combination of the two. If Kubernetes is available, your Knative services will work, eliminating the problem of vendor lock-in entirely.

Knative capabilities

Knative also provides a number of middleware components that you can use to extend Kubernetes. That is what Knative does: it uses the basic building blocks of Kubernetes and adds new blocks to them, so everything stays within the Kubernetes paradigm.

Knative capabilities include:

  • Scale-to-zero: If there’s no traffic to your pod, nothing will be running. That means no memory and no CPU in use, so you’re saving money and resources.

  • Scale-from-zero: If you have a traffic spike for whatever reason, Knative will scale everything up automatically.

  • Configurations and revisions: If you want to do blue/green deployments or canary deployments, you can do that with Knative.

  • In-cluster image building: Tekton is a powerful, Knative-based framework for creating continuous integration and delivery (CI/CD) systems that can be used to deploy to any Kubernetes cluster across multiple hybrid cloud providers.

  • Traffic splitting: Knative allows you to split traffic between revisions and choose how much traffic is sent to each (see the sketch after this list).

  • Eventing system: Triggering workloads on specific events.
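
To make the revision and traffic-splitting capabilities concrete, here’s a minimal sketch of a Knative service that sends 80% of traffic to an existing revision and 20% to a new one. The service name, revision names and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                    # hypothetical service name
spec:
  template:
    metadata:
      name: greeter-v2             # names the new revision
    spec:
      containers:
        - image: quay.io/example/greeter:v2   # hypothetical image
  traffic:
    - revisionName: greeter-v1     # previously deployed revision
      percent: 80
    - revisionName: greeter-v2     # canary revision
      percent: 20
```

Shifting the percentages gradually toward the new revision gives you a canary rollout; flipping them in one step gives you a blue/green cutover.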

Kubernetes vs. Knative deployments

In a traditional Kubernetes deployment, you have a container image hosted somewhere and a YAML file that describes the deployment. The file is applied to the cluster, the deployment creates a replica set that creates the specified number of pods, and a service is then created that matches the labels of those pods.

And there are other steps needed in various situations, such as having to create routes if the service can’t use a cloud load balancer. So a traditional Kubernetes deployment has quite a few steps, including long YAML files and various moving parts.
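
As a rough sketch of those moving parts, the following manifest pairs a deployment with a service; the names, image and ports are hypothetical:

```yaml
# Deployment: describes the desired pods; creates a ReplicaSet behind the scenes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter
spec:
  replicas: 2                      # number of pods to run
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      labels:
        app: greeter               # labels the Service selects on
    spec:
      containers:
        - name: greeter
          image: quay.io/example/greeter:v1   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: matches the pods' labels and load-balances traffic to them.
apiVersion: v1
kind: Service
metadata:
  name: greeter
spec:
  selector:
    app: greeter
  ports:
    - port: 80
      targetPort: 8080
```

And on OpenShift, exposing this outside the cluster still takes a route (or an ingress) on top of it.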

Figure 1.

In contrast, with serverless you don’t have to worry about infrastructure at all. You have a container image and you run it on a cluster, and that’s it: everything is taken care of for you.

With Knative, you just have to write one small resource file, called a Knative service, that says “I want to run this image.” Once it’s applied to your cluster, Knative’s serving controller picks it up and automatically creates the resources required, including the deployment, the service, a route (if needed), and a configuration resource that manages revisions, making it easy to roll back to a previous version if needed.
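
Here’s a minimal sketch of what such a Knative service resource looks like; the name and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                    # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:v1   # hypothetical image
```

Compare this with the deployment-plus-service manifest above: one short resource replaces the whole chain of deployment, replica set, service and route, and scale-to-zero comes along for free.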

Figure 2.

Knative tutorial

If you’re interested in getting started with your serverless journey, there’s a Knative tutorial available on GitHub.

The tutorial walks you through setup, Knative serving, Knative eventing, Apache Camel K, and some more advanced concepts and applications.
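
To give a flavor of the eventing side, here’s a minimal sketch of a trigger that subscribes a Knative service to events from a broker; the broker name, event type and service name are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: greeter-trigger
spec:
  broker: default                  # broker to subscribe to (hypothetical)
  filter:
    attributes:
      type: dev.example.greeting   # only deliver CloudEvents of this type (hypothetical)
  subscriber:
    ref:                           # deliver matching events to this Knative service
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter
```

When a matching event arrives, Knative scales the subscriber up from zero, delivers the event, and lets it scale back down once the traffic stops.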

Learn more

Kubernetes Summer Camp videos include:

  • Opening session, including an OpenShift developer tour with Serena Nichols, OpenShift Developer Tooling Product Manager and Distinguished Engineer at Red Hat

  • Kubernetes I Deep Dive with Edson Yanaga, Director of Developer Experience at Red Hat

  • Knative Serverless Deep Dive with Sebastien Blanc, Director of Developer Experience at Red Hat

And the free downloadable content includes:

  • Kubernetes patterns for designing cloud-native apps (266-page O’Reilly e-book)

  • Red Hat OpenShift Container Platform: Kubernetes for rapid innovation (product overview)

  • The benefits of training on Red Hat OpenShift (infographic)

  • Red Hat’s OpenShift serverless for hybrid, legacy and greenfield (research report)

  • 7 characteristics of successful hybrid cloud strategies (guide for IT leaders)