One of the things to think about when designing promotion pipelines for applications deployed in OpenShift is a strategy for managing environment-dependent properties.

When an application is promoted through environments (such as DEV, IT, or QA), some of its configuration properties need to change, for example, the connection string to a database or the URL of a called web service.

How an application expects to read its configuration is completely application-dependent. That said, over the course of several projects we have seen some patterns emerge that we have found to be successful. No single approach is better or worse than the others; it is the responsibility of the pipeline designer to choose the best approach for a given context.

This blog post focuses on environment-dependent properties, but the same approaches could be potentially used for all properties, whether or not they are environment-dependent. Credential management is out of the scope of this blog post.

Here are four approaches that cover the most common use cases.

Using Environment Variables

The properties are passed by OpenShift as environment variables. The following is a fragment showing how such a pod template (or higher-level structures such as replication controllers and deployment configs) would be configured:

      env:
      - name: MY_EXTERNAL_ENDPOINT
        value: http://xxx.yyy.zzz

Your application needs to be written to read its properties this way. If you are using Spring, there are several ways of achieving this. Here is one example:

 @Value("#{systemEnvironment['MY_EXTERNAL_ENDPOINT']}")
 private String myExternalEndpoint;

Keep in mind that with this approach, if the value of a property changes, your CI/CD workflow needs to be able to update it in the pod template; and if a property is added or removed, your CI/CD workflow needs to be able to handle that event appropriately.

In general, this approach does not scale well once you have more than a dozen properties.
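Outside Spring, reading such a variable is straightforward. Here is a minimal sketch with a fallback default; the variable name matches the pod template above, while the default URL is a hypothetical value for local development:

```java
// Minimal sketch: read an environment variable with a fallback default.
// The default URL is a hypothetical value for local development.
public class EnvConfig {
    static String endpoint() {
        String value = System.getenv("MY_EXTERNAL_ENDPOINT");
        // Fall back to a default when the variable is not set
        return value != null ? value : "http://localhost:8080";
    }

    public static void main(String[] args) {
        System.out.println("Endpoint: " + endpoint());
    }
}
```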

Using One Environment Variable to Determine the Environment

In this approach, we use an environment variable to determine in which environment the application is being deployed. The pod template would look like the following:

      env:
      - name: ENV
        value: {IT|QA|PROD}

The Docker image of your application will need to contain the properties for every possible environment and be able to select the right configuration file based on the ENV environment variable.
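The selection logic can be framework-agnostic. The sketch below derives the configuration file path from ENV; the /config directory and the application-&lt;env&gt;.properties naming convention are assumptions for illustration:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Sketch: select the properties file for the current environment.
// The /config directory and the application-<env>.properties naming
// convention are assumptions, not a fixed OpenShift convention.
public class EnvSelector {
    static String configPath(String env) {
        // e.g. ENV=QA -> /config/application-qa.properties
        return "/config/application-" + env.toLowerCase() + ".properties";
    }

    static Properties load(String env) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(configPath(env))) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) {
        String env = System.getenv().getOrDefault("ENV", "DEV");
        System.out.println("Would load: " + configPath(env));
    }
}
```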

Using Spring, this can be achieved elegantly with profiles. You create one configuration class per environment, annotated as follows:

 @Configuration
 @PropertySource("file:///<well-known-location>/application-dev.properties")
 @Profile("dev")

Then use this pod configuration to activate the right profile:

      env:
      - name: SPRING_PROFILES_ACTIVE
        value: dev

Notice that in this case we don't even need the ENV variable.

This approach makes your CI/CD pipeline easier to implement because you no longer have to manage changing property values or adding and removing properties.

This approach falls short when the number and type of your environments start to change (for example, because you start provisioning environments dynamically).

ConfigMap

ConfigMap is an OpenShift API object whose purpose is to inject configuration into applications. You can create a ConfigMap that contains a properties file as follows:

 oc create configmap my-app-config --from-file=path/to/application.properties

This ConfigMap will be mounted as a file called application.properties in a directory that you can configure in your pod template. Here are the relevant sections:

      volumes:
      - name: my-app-config
        configMap:
          name: my-app-config
      ...
      env:
      - name: CONFIG_LOCATION
        value: /etc/myapp/config
      volumeMounts:
      - name: my-app-config
        mountPath: /etc/myapp/config

Notice that we use an environment variable to pass the location of the properties file to the application. This environment variable must match the volume mount point and never needs to change.
The application will need to initialize its properties by reading the location from that environment variable. If you are using Spring, this can be done as follows:

 @PropertySource("file:///${CONFIG_LOCATION}/application.properties")

This approach allows for easily sharing properties between different applications because they can all mount the same configmap.
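Outside Spring, the same ConfigMap-mounted file can be read with plain java.util.Properties. A sketch, assuming the CONFIG_LOCATION variable and mount point from the pod template above:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// Sketch: load application.properties from the directory named by
// CONFIG_LOCATION (the ConfigMap mount point in the pod template above).
// The /etc/myapp/config fallback mirrors the template's default.
public class ConfigMapLoader {
    static Path propertiesFile(String configLocation) {
        return Paths.get(configLocation, "application.properties");
    }

    static Properties load(String configLocation) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(propertiesFile(configLocation))) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) {
        String location = System.getenv().getOrDefault("CONFIG_LOCATION", "/etc/myapp/config");
        System.out.println("Reading " + propertiesFile(location));
    }
}
```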

With this approach, you have to manage an additional OpenShift API object in your CI/CD pipeline: the ConfigMap. Someone or some process must be able to create and update it when properties are changed, added, or removed.

Config Store Service

In this approach, a service that we will refer to as the config store serves configurations for one or more applications. Presumably, this service can manage multiple environments: when an application starts, it calls the service, passing its identity and the environment for which it wants the configuration, and the service responds with the appropriate properties.

The environment and the config store endpoint will have to be passed using one of the previous methods. If using environment variables, the pod template would look as follows:

      env:
      - name: ENV
        value: {IT|QA|PROD}
      - name: CONFIG_URL
        value: http://myconfig.xxx.yyy

This method is appropriate when application properties need to be reloaded at runtime. One shortcoming of this method is that your application now depends on an external service (which may be down) in order to start. To overcome this limitation, you should define defaults for all your properties so that your application can start even if the config store service is down.
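The default-fallback idea can be sketched as follows. The CONFIG_URL and ENV values mirror the pod template above; the endpoint layout (&lt;url&gt;/&lt;app&gt;/&lt;env&gt;.properties) and the property names are assumptions for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Properties;

// Sketch of a client for a hypothetical config store service: start
// from hard-coded defaults, then overlay whatever the service returns,
// so the application can boot even when the store is unreachable.
// The endpoint layout (<url>/<app>/<env>.properties) is an assumption.
public class ConfigStoreClient {
    static Properties defaults() {
        Properties props = new Properties();
        props.setProperty("my.external.endpoint", "http://localhost:8080");
        return props;
    }

    static Properties fetch(String configUrl, String app, String env) {
        Properties props = defaults();
        try {
            URL url = new URL(configUrl + "/" + app + "/" + env + ".properties");
            try (InputStream in = url.openStream()) {
                props.load(in); // remote values override the defaults
            }
        } catch (IOException e) {
            // Config store is down: keep the defaults and start anyway
            System.err.println("Config store unavailable, using defaults: " + e.getMessage());
        }
        return props;
    }
}
```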

Archaius is a Netflix library designed to aggregate properties from different sources and poll those sources at regular intervals for updates. It integrates with Spring Cloud.

Conclusions

As stated in the introduction, none of these methods is better than the others. The choice of which one to use should be made on a case-by-case basis. One more thing to keep in mind is that these methods can be combined. If you are planning to port a large number of applications to OpenShift and your applications are all built using the same framework and basic architecture, then it may be a good idea to standardize on one of these methods.



About the author

Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect, then gradually moved to Integration Architect and later Enterprise Architect. He then moved to the United States to eventually become an OpenShift Architect for Red Hat consulting services, acquiring, in the process, knowledge of the infrastructure side of IT.

Currently, Raffaele holds a consulting position as a cross-portfolio application architect with a focus on OpenShift. For most of his career, Raffaele has worked with large financial institutions, which has given him an understanding of enterprise processes and of the security and compliance requirements of large enterprise customers.

Raffaele is a member of the CNCF TAG Storage and contributed to the Cloud Native Disaster Recovery whitepaper.

Recently, Raffaele has been focusing on how to improve the developer experience by implementing internal developer platforms (IDPs).
