Knative’s first year: Where it’s at and what's next in serverless

Today we celebrate the one-year anniversary of the Knative project's arrival in the world of Kubernetes. Red Hat is one of the top vendor contributors, focused on bringing the project to enterprises looking to enable portability of serverless applications in hybrid environments.

Knative helps developers build and run serverless applications anywhere Kubernetes runs—on-premise or on any cloud. Originally started by Google, it is maintained by a community that includes companies like Red Hat, Google, IBM and SAP, along with a great ecosystem of startups. The project extends Kubernetes with a set of components for deploying, running and managing modern applications serverlessly. Serverless computing means building and running applications that do not require server management and that scale up and down (even to zero) based on demand, usually driven by incoming events. Knative was announced last year with the goal of making it easier for developers to focus on their applications rather than the underlying infrastructure, and our work has coalesced and consolidated into this initiative as a community rather than each vendor attempting to handle it alone. 

Since its initial unveiling to the community, 80+ organizations have contributed to the project and we continue to work together to further bring the project features to users. 

“We worked with open source leaders like Red Hat to develop Knative and appreciate the community collaboration and number of contributions that have taken place over the past year. Knative enables developers to focus on writing code without the need to worry about the tedious, difficult parts of deploying, running and managing their application. We believe Knative has the potential to provide a better overall developer experience on Kubernetes,” said Ryan Gregg, product manager, Knative and Google Cloud Run at Google.

Knative: Looking a year back

Knative company contributions as of July 15, 2019 (source)

Knative was originally made up of three main components - Build, Serving and Eventing - but along the way the Build module evolved, and the need for a complete CI/CD pipeline solution became clearer. This gave rise to Tekton, a Kubernetes-native CI/CD pipeline that is the foundation of OpenShift Pipelines in Red Hat OpenShift 4. The current main components of Knative are:

  • Serving: Offers a request-driven model that serves containerized workloads that auto-scale based on demand and that can "scale to zero."
  • Eventing: Common infrastructure for consuming and producing events that trigger applications.
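To make the Serving model concrete, here is a minimal sketch of a Knative Service manifest (the name and image are illustrative, and the exact API version depends on your Knative release):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                          # illustrative service name
spec:
  template:
    spec:
      containers:
        - image: dev.local/ns/image:latest
```

Applying a manifest like this is all it takes: Serving creates a revision, routes traffic to it, and scales the underlying pods with demand—including down to zero when no requests arrive.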

Along with those components, there is also `kn`, the official Knative command-line interface (CLI) that enables a great developer experience where a user can create a simple application as follows: 

 kn service create myapp --image dev.local/ns/image:latest

There are many other options available as well that allow users to specify limits for CPU or memory consumption, and limits for scale such as concurrency and the number of instances per service. Today, kn mostly covers Serving, but work is underway on Eventing features to enable use cases for that module as well. Think of `kn` as kubectl, but for Knative. 
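Those CLI options ultimately map onto fields of the Service resource. As a sketch (annotation and field names follow the Knative autoscaling conventions; the values here are illustrative), a service capped at 5 instances, 10 concurrent requests per instance, and fixed resource limits might look like:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "5"   # upper bound on instances
    spec:
      containerConcurrency: 10                  # in-flight requests per instance
      containers:
        - image: dev.local/ns/image:latest
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
```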

Although Serverless is commonly associated with Functions, Knative goes beyond that and enables a better overall developer experience on Kubernetes for almost any class of application or microservice. 

Serverless and Red Hat OpenShift

Coupled with Red Hat OpenShift, Knative can further enable portability of operations and application development in hybrid environments, and that is, in essence, what Red Hat OpenShift Serverless offers. It is the foundation for running serverless workloads on the platform. It enables developers to build event-driven applications that scale on demand, following a set of best practices that support fast development of cloud-native applications—or almost any containerized workload.  

With the Operators available in OpenShift, the installation experience and dependency management are handled in a streamlined way, so installing, updating or uninstalling Knative from a cluster is a breeze. Using the Operator Lifecycle Manager available in OpenShift, cluster administrators can rest assured that installation procedures are consistent and that packaging and dependencies are handled by the cluster. The user interface provides a consistent user experience that doesn't require a deep understanding of Knative itself to provision a production-ready installation. 
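Under the hood, the Operator Lifecycle Manager drives this from a Subscription custom resource. A hedged sketch of what such a Subscription could look like (the package and channel names here are assumptions and vary by release; the Subscription schema itself is standard OLM):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator          # illustrative package name
  namespace: openshift-operators
spec:
  channel: techpreview               # channel name varies by release
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Once the Subscription exists, OLM resolves dependencies, installs the Operator, and keeps it updated on the chosen channel.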

Knative Operator Install - Red Hat OpenShift

Red Hat donated the Operator code for Knative Serving and Eventing to the Knative project to make sure that others in the community can also leverage the same facilities to install, upgrade and operate Knative at scale. Soon, we expect to have those Operators released by the Knative project and available on Operatorhub.io.

As part of another set of integrations when running in OpenShift, Knative can make use of services available on the platform, such as Monitoring, Logging and Metering. Using Operator Metering as an example, users can enable the following predefined reports: 

  • Accumulated CPU seconds used per Knative service over the report time-period
  • Average Memory consumption per Knative service over the report time-period
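These reports are requested through Report custom resources in Operator Metering. A hedged sketch of what requesting the CPU report could look like (the API version, field names and query name here are assumptions based on Operator Metering conventions and may differ by release):

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: knative-service-cpu-june     # illustrative report name
  namespace: openshift-metering
spec:
  query: knative-service-cpu-usage   # illustrative predefined query name
  reportingStart: "2019-06-01T00:00:00Z"
  reportingEnd: "2019-06-30T23:59:59Z"
  runImmediately: true
```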

A sample report for CPU usage for two Knative services looks like (simplified):

  start       end         namespace  service           service_cpu_seconds
  ----------  ----------  ---------  ----------------  -------------------
  2019-06-01  2019-06-30  default    hello             298.535220
  2019-06-01  2019-06-30  default    random-generator  418.119120

More reports can be created and customized per deployment, but this is a great example of how integration with the platform services can provide a better experience. Since Serverless is all about consumption-based pricing, platform vendors and administrators can understand which services, namespaces or teams are actually consuming more resources, and which are underutilized. This is a hard problem to solve for serverless workloads, which are bursty and unpredictable by nature. 

Another important development for OpenShift Serverless is how we are surfacing serverless features within the platform itself, in workflows our users already know. An example is Red Hat OpenShift 4.2: the Developer perspective in the Web Console enables Serverless in the Import from Git and Deploy Image workflows, making the overall experience very familiar to our customers.

example of Knative - Red Hat OpenShift Serverless

Integrated with OpenShift Service Mesh and OpenShift Pipelines, this stack, we believe, offers a powerful combination of technologies and a great user experience—something customers trying to make sense of all the components and projects in this space have been looking for. 

Connecting Serverless to the rest of your enterprise with Camel-K 

Serverless workloads need to interact with a diverse set of services across the hybrid cloud landscape, including on-premise, private cloud and public cloud. To address this need, the Camel-K project in Apache Camel has been designed from the ground up to address connectivity and integration requirements for serverless applications. With Camel-K, serverless developers can tap into the rich set of 200+ connectors as event types for their serverless applications and functions. Developers can also leverage the lightweight Camel DSL inside Knative services to perform stateless orchestration of downstream services. 
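As a sketch of what this looks like in practice (the endpoint names are illustrative, and the `knative:` component URI follows the camel-knative conventions), a Camel K integration can route events from any Camel component into a Knative service:

```java
// Sketch of a Camel K integration using the Camel DSL.
import org.apache.camel.builder.RouteBuilder;

public class Forwarder extends RouteBuilder {
    @Override
    public void configure() {
        // Any of Camel's 200+ components could be the event source here.
        from("timer:tick?period=3000")
            .setBody().constant("event payload")
            // Deliver the event to a Knative service via the camel-knative component.
            .to("knative:endpoint/myapp");
    }
}
```

The same DSL can also be used inside a Knative service to perform the stateless orchestration of downstream services mentioned above.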

Extending Serverless with Kiali and Service Mesh

The OpenShift Serverless feature set can also be augmented by OpenShift Service Mesh components. This can provide advanced traffic shaping and observability into the serverless applications. For example, using Kiali, users can visualize the topology of those services and explore a number of metrics of how applications are running at scale.

Operator ecosystem 

Recently, TriggerMesh announced that the open source TriggerMesh Operator is available for OpenShift. Using the serverless capabilities enabled by Knative in OpenShift 4, the TriggerMesh Operator allows OpenShift users to install the TriggerMesh Serverless platform, bringing its opinionated user experience for developing serverless applications. You can read more about it in their post.

What’s ahead for serverless

Red Hat has been working alongside the community on the early creation and maturation of Knative, and the work has come a long way in just one year—but we still have work to do. While Knative is currently in Developer Preview in Red Hat OpenShift Serverless, as the project matures and nears general availability, we look forward to further maturing it in OpenShift, transitioning to Technology Preview very soon. The impact the project has already had in helping consolidate serverless solutions for Kubernetes is a great sign of maturity. The next steps are toward stabilizing the Eventing capabilities and adopting the project as the basis for a Function framework. We believe this has the potential to make Kubernetes easier for any developer running stateless workloads, or simply for those just getting started with Kubernetes.

Get started with Knative and try it out on Red Hat OpenShift 4:


About the author

William Markito Oliveira is an energetic and passionate product leader with expertise in software engineering and distributed systems. He leads a group of product managers working on innovative and emerging technologies.
