
12 Factor App meets Kubernetes: Benefits for cloud-native apps

Applying the 12 Factors while architecting your cloud-native applications helps keep them agile, scalable, and portable for years to come.

The 12 Factor App methodology emerged about a decade ago, years before containers became the established way for packaging and deploying applications. The 12 Factor App principles were intended as guidelines for making an application more suitable for cloud-based deployments by enforcing characteristics that make applications disposable and easy to scale.

According to the 12 Factor App principles, software as a service (SaaS) should:

  • Use declarative formats for setup automation, to minimize time and cost for new developers joining the project.
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments.
  • Be suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration.
  • Minimize divergence between development and production, enabling continuous deployment for maximum agility.
  • Scale up without significant changes to tooling, architecture, or development practices.

How to use the 12 Factors with containers and Kubernetes 

How you build, package, and deploy workloads has evolved significantly in the years since the 12 Factors were released. Containers, Kubernetes, and cloud-native are now mainstream technologies that enable you to create portable, scalable, and robust applications. Even so, the 12 Factors remain relevant in today's ecosystem.

This article describes the 12 Factor App methodology and applies the concepts to applications in the context of Linux containers and Kubernetes.

1. One codebase tracked in revision control, many deploys

A codebase is the repository of code that makes up your application, and you should manage it with a version control system, preferably Git. You can distinguish releases for different environments (dev, test, prod) using separate branches.

Kubernetes resources and containers are created from text-based definitions (such as Kubernetes YAML manifests and Dockerfiles). Automation tools such as Ansible describe the expected system state in their own files. These artifacts evolve over time, and it's smart to manage them with source control, just as you manage application code.

When you express applications and infrastructure as code and configuration version controlled in Git, you can more easily apply techniques such as GitOps and continuous integration/continuous deployment (CI/CD). Making Git the single source of truth helps prevent promoting unexpected changes, makes your application state reproducible, and provides accountability for changes introduced into your environments.
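
For example, here is a minimal sketch of how one codebase in Git can drive many deploys using a Kustomize layout. The file and directory names are illustrative, not from this article:

```yaml
# base/kustomization.yaml -- manifests shared by every environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```

```yaml
# overlays/prod/kustomization.yaml -- only what differs in production
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml   # hypothetical patch that raises replicas
```

Each environment gets its own small overlay, while the base manifests stay in one place under version control.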

[ You might also be interested in reading Implementing single source of truth in an enterprise architecture. ]

2. Explicitly declare and isolate dependencies

Explicitly declare application dependencies and manage them with a package manager. Every language has tools available for declaring and managing dependencies. For example, Maven is extremely popular with Java applications, and NPM is the default package manager for Node.js applications. Dependencies should not be committed to Git. Instead, version the configuration that describes the dependencies. The package manager can retrieve those dependencies at build time and bundle them with the application.

The application should be fully self-contained. Bundle everything the app needs to operate into a single deployable unit to eliminate coupling to the infrastructure. When the application and infrastructure are well decoupled, you can easily move your workload to different platforms or infrastructure and avoid discrepancies between environments.
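
As a sketch of this idea, a multi-stage Dockerfile can declare dependencies through the package manager at build time and ship them inside the image. The file layout and the server.js entry point below are hypothetical:

```dockerfile
# Dependencies are declared in package.json, resolved by npm at build
# time, and bundled into the image -- never committed to Git.
FROM node:18 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci                  # install exactly what the lockfile declares
COPY . .

FROM node:18-slim
WORKDIR /app
COPY --from=build /app ./   # the app and its node_modules travel together
CMD ["node", "server.js"]   # hypothetical entry point
```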

3. Store config in the environment

Application configuration is usually specific to the environment where the app runs (dev, test, prod). It is good hygiene to avoid building the application separately for each environment. Build the application once, and apply the configuration at runtime or when the application starts up.

The best way to do this is to ensure that environment-specific configuration is not bundled inside the app but rather made available by the environment where the workload is running. Kubernetes provides several constructs for attaching environment-specific configuration to your running pods, including environment variables and ConfigMaps. Ensure anything that varies between environments (or deployments) is externalized into one of these two options and attached to your pods at runtime.

When you deploy an application directly on a host, the best way to keep environment-specific configuration tied to the environment is to make it available on the host. Container and container orchestration technologies make this straightforward through ConfigMaps, environment variables, and Secrets. You shouldn't need to bundle environment-specific configuration within your container image. Instead, externalize it into a ConfigMap so that you can run the same container image in another environment with a different config.
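
Here is a minimal sketch of that pattern; the names and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  DATABASE_HOST: db.dev.example.com   # differs in each environment
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:1.0   # the same image everywhere
      envFrom:
        - configMapRef:
            name: myapp-config   # config attached at runtime, not baked in
```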

[ Download this checklist to get details on 10 considerations for Kubernetes deployments. ]

4. Treat backing services as attached resources

Backing services are services accessed over the network. Decouple databases, message queues, third-party services, and any other type of external resource from the system as much as possible, and interface with them using APIs. If you need to swap out the application database, you don't want to have to modify your codebase to account for the change.

APIs should adhere to consistent contracts, allowing the underlying implementations to change without clients being aware. Store connection information for external services in Kubernetes ConfigMaps and environment variables so that you don't need to rebuild the container image when connection details change.
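
A hedged sketch of treating a database as an attached resource; the Secret name, credentials, and image are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: orders-db
type: Opaque
stringData:
  DATABASE_URL: postgresql://orders:changeme@db.example.com:5432/orders
---
apiVersion: v1
kind: Pod
metadata:
  name: orders-api
spec:
  containers:
    - name: orders-api
      image: registry.example.com/orders-api:1.0
      env:
        - name: DATABASE_URL        # swap databases by changing the Secret,
          valueFrom:                # not the codebase or the image
            secretKeyRef:
              name: orders-db
              key: DATABASE_URL
```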

5. Strictly separate build and run stages

Keep strict separations between the build, release, and run stages. Many organizations choose to automate the building, testing, and promotion of workloads through CI/CD toolchains. CI/CD pipelines typically consist of a set of steps (gates) that perform specific actions chained together as part of the software release lifecycle.

Define the steps of your workflow clearly, and don't let any single step handle more than one function. For example, a standard application pipeline may chain together various steps to build, test, package, archive, deploy, and promote an application into production environments.

By splitting your pipeline into a set of sequential tasks, you have greater reproducibility, accountability, and confidence about the steps required to move an application through the release lifecycle. The entire process is reproducible, so developers can easily identify any release failures and have greater insight into the steps required to go from code in source control to an application serving production traffic.

In the world of containers, a pipeline should only build a container image one time and then test, promote, and deploy the image as a series of running container instances. By building only one time, the promoted container's bits will be identical across environments, leading to greater confidence in production deployments and reduced discrepancies between different deployments if you find yourself in a troubleshooting scenario.
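
As one possible sketch of such a pipeline on Kubernetes, a Tekton Pipeline can chain these stages together. This assumes the git-clone and buildah tasks from Tekton Hub are installed; the deploy task and repository URL are hypothetical placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-once-promote
spec:
  workspaces:
    - name: source
  params:
    - name: image
      type: string                  # e.g., registry.example.com/myapp:<git-sha>
  tasks:
    - name: clone
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://git.example.com/myapp.git   # hypothetical repo
    - name: build                   # the image is built exactly once here
      runAfter: [clone]
      taskRef:
        name: buildah
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE
          value: $(params.image)
    - name: deploy-test             # later stages reuse the same image bits
      runAfter: [build]
      taskRef:
        name: deploy                # hypothetical deployment task
      params:
        - name: image
          value: $(params.image)
```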

6. Execute the app as one or more stateless processes

Containers are ephemeral by nature, meaning the data stored inside a container is lost when the container goes away. Working with containers pushes application developers to find better ways to manage state, whether that means storing data in a database or maintaining state in an external cache. Minimizing state in containerized workloads helps ensure that applications can scale up and down easily without affecting the user's experience.
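
A minimal sketch of the idea: a Deployment with no volumes, where session state lives in an external cache the app reaches through configuration. All names here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # any replica can serve any request
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0
          env:
            - name: REDIS_HOST            # state lives outside the pod,
              value: redis.example.com    # so pods can come and go freely
```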

7. Export services via port binding

Traditional enterprise applications often depend on an application server or a separate process to handle requests from outside the environment. The application relies on another system for something that, according to the 12 Factor App principles, it should handle itself: binding to a port and accepting traffic.

Many modern frameworks and cloud-native toolkits handle this for you, regardless of the language you use to write your application. Once you deploy your application as a container, it binds to a port on the container. If the workload is exposed only internally to the cluster, abstract communication to that port using a Kubernetes Service object.

If the workload accepts traffic coming from outside the cluster, there are many methods you can use, such as Ingress controllers, NodePorts, and OpenShift routes. Containers also simplify networking: the software-defined networks in Kubernetes platforms greatly reduce the host-level limitations and port collisions that running workloads would otherwise encounter.
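
A sketch of both cases, with hypothetical names and ports: a Service abstracts the port the container binds, and an Ingress exposes it outside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080        # the port the container itself binds
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com   # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```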

[ Build an architecture that meets your organization's needs by taking the online course Developing cloud-native applications with microservices architectures. ]

8. Scale out via the process model

Kubernetes allows you to ensure a given number of pods is running at any given time through a series of different controllers. For example, ReplicaSets (usually managed through Deployments) ensure that a desired number of pods is always running on a Kubernetes cluster. Horizontal Pod Autoscalers allow users to define resource thresholds for scaling pods based on CPU and memory consumption, as well as the maximum and minimum number of replicas that should be present on the platform at any given time.
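
For example, here is a minimal HorizontalPodAutoscaler sketch using the autoscaling/v2 API; the target name and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add pods past 75% average CPU
```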

9. Maximize robustness with fast startup and graceful shutdown

The idea of disposability stems from the notion that you can use the Linux kernel's signal model to interact with running processes. In the world of Kubernetes, pods are the processes that the control plane manages. When you design your containers to be minimal in size and stateless, you can scale up and down with minimal impact on end users.

Consider implementing liveness and readiness probes within Kubernetes specific to your workload so that Kubernetes knows when your container is completely initialized and ready to receive load, or when it is unhealthy and needs to be disposed of and recreated. When designing your containers, choose base container images intended to run in a container (not simply backported from a host-based deployment). These images often take the 12 Factors into account, making workloads portable to the cloud and eliminating a lot of effort for developers.
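
A hedged sketch of those probes plus a graceful-shutdown window; the endpoint paths and port are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  terminationGracePeriodSeconds: 30  # time to drain in-flight requests on SIGTERM
  containers:
    - name: web
      image: registry.example.com/web:1.0
      readinessProbe:                # gate traffic until fully initialized
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
      livenessProbe:                 # restart the container when unhealthy
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
```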

10. Keep development, staging, and production as similar as possible

Organizations often choose to package their workloads into containers for portability. When you build your container image in one environment, that image should run predictably on any infrastructure or environment where you run it, so long as you pack everything needed to run the container into the image.

There are always cases where minimal environmental drift can occur, but one of the best ways to avoid this is to standardize the same distribution of Kubernetes across all environments. This provides container platform users with a consistent experience across all environments using the platform.

[ Learn how IT modernization works on multiple levels to eliminate technical debt. Download the Alleviate technical debt whitepaper. ]

11. Treat logs as event streams

Application logs are critical for understanding how an application is behaving while running. When organizations move from application deployments on the host to containerized applications, maintaining and making sense of logs can become a big problem.

The pods that make up your application move from host to host and scale up and down, and their logs need to be viewable and searchable by application teams and operators. Many organizations already have aggregated logging systems, such as Splunk, to ingest, index, and make logs across many systems searchable from a single pane of glass. Technologies like Splunk, decoupled from the application, just need to know how to retrieve logs to ingest them. To do this, make sure your containerized workloads write their logs to STDOUT/STDERR.

Many modern application libraries and frameworks do this for you out of the box, as they have adapted to this practice, so most of the work is already done for you. If you are leveraging older frameworks or managing much of the logging manually, ensure that you are logging to STDOUT so that the platform can capture your application logs.
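
One common pattern for feeding an aggregator is a node-level collector that reads the logs the kubelet writes for every container. A sketch, assuming a Fluent Bit collector with its configuration omitted for brevity:

```yaml
apiVersion: apps/v1
kind: DaemonSet                 # one collector pod per node
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: collector
          image: fluent/fluent-bit:2.2   # one of several common collectors
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log      # where the kubelet writes container logs
```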

12. Run admin and management tasks as one-off processes

A workload's admin executes some management tasks rarely (like seeding a database) or on a recurring schedule (like a database backup). Consider implementing one-off tasks as Kubernetes Jobs and recurring, time-based tasks as CronJobs.
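
A sketch of both, with hypothetical images and commands:

```yaml
apiVersion: batch/v1
kind: Job                     # a one-off task, such as seeding a database
metadata:
  name: seed-db
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: seed
          image: registry.example.com/dbtools:1.0
          command: ["./seed.sh"]        # hypothetical seeding script
---
apiVersion: batch/v1
kind: CronJob                 # a recurring task, such as a nightly backup
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"       # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: registry.example.com/dbtools:1.0
              command: ["./backup.sh"]  # hypothetical backup script
```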

You may need to initialize some management processes based on environmental conditions, such as elaborate recoveries from failures or rebalancing of resources under load. A common Kubernetes architecture is the Operator pattern, where the logic to handle these conditions gets implemented in a custom controller that runs on Kubernetes with your workload. The Operator checks the system's state in a continual loop and executes the reconciliation logic when it meets predefined conditions. You can build operators to handle these types of complex management tasks consistently by leveraging the Operator SDK.

12 Factor containers

The 12 Factors for designing applications were established before containerization became mainstream. Kubernetes, containers, and cloud-native application frameworks have evolved and become how we build apps today, but the 12 Factors still apply in today's landscape. Keeping the 12 Factors top of mind while architecting your cloud-native applications ensures that your applications remain agile, scalable, and portable for years to come.
