If you are using JBoss Enterprise Application Platform (EAP) for J2EE development, the CloudBees Jenkins Platform provides an enterprise-class toolchain for automated CI/CD from development to production.

The CloudBees Jenkins Platform now supports integrations with both Red Hat JBoss Enterprise Application Platform (EAP) and Red Hat OpenShift across the software delivery pipeline. This enables developers to build, test and deploy applications with Jenkins-based continuous delivery pipelines, whether JBoss EAP 7 runs standalone or on OpenShift.

The following examples are based on the Jenkins Pipeline plugins, which let teams model their software delivery process as pipelines of arbitrary complexity. If you are not familiar with the CloudBees Jenkins Pipeline plugin, you may find these two blog posts helpful for ramping up: Using the Pipeline Plugin to Accelerate Continuous Delivery -- Part 1 and Part 2.

Let's get started. In a typical CI/CD pipeline your process would be similar to this one:



  • Developers commit code to the SCM, which notifies Jenkins via webhooks.
  • Jenkins compiles the code and executes a series of tests on it: static code analysis, code metrics, unit testing, etc.
  • If everything goes well, Jenkins deploys the code to a development environment. This step may require a manual approval, depending on the use of that environment. A typical use case is deploying the application just to run further validations with tools like Selenium.
  • The steps that follow promote the application between the various environments and validate that each deployment was correct.
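The flow above can be sketched as a scripted Jenkins pipeline. This is a minimal illustration only: the stage names, the `mvn` invocation and the input gate are assumptions, not taken from the examples in this post.

```groovy
// Hypothetical Jenkinsfile skeleton for the CI/CD flow described above.
node {
    stage('Checkout') {
        checkout scm // the SCM webhook triggers this job on each commit
    }
    stage('Build and test') {
        sh 'mvn clean verify' // compile plus static analysis, metrics and unit tests
    }
    stage('Deploy to Dev') {
        // optional manual gate, depending on how the environment is used
        input message: 'Deploy to the development environment?'
        // the actual deployment step depends on the target: JBoss EAP 7 or OpenShift
    }
}
```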

Let's see how the build, deployment and promotion between the various environments can be done for both types of JBoss installs, JBoss EAP 7 standalone and JBoss EAP 7 on OpenShift, and the differences between them.

Build and Deployment

JBoss EAP 7

In this case, we take the produced artifact and deploy it directly to the server. There are two possible approaches to get the code deployed: the JBoss CLI and the WildFly Maven Plugin.

Both produce very similar results, but our recommended approach is the JBoss CLI, as it gives you maximum flexibility and allows development and operations teams to share a common tool:

checkpoint 'Deploy to QA'
// we use a Docker image that comes with the JBoss CLI preinstalled
docker.image('').inside {
  // Deploy to JBoss
  def destinationWarFile = "movieplex-${env.BUILD_NUMBER}.war"
  def versionLabel = "movieplex#${env.BUILD_NUMBER}"
  def description = "${env.BUILD_URL}"
  withCredentials([[$class: 'UsernamePasswordMultiBinding',
                    credentialsId: 'jboss-ec2',
                    passwordVariable: 'password',
                    usernameVariable: 'username']]) {

    sh "/opt/eap/bin/ --connect --user=${env.username} --password=${env.password} --command='deploy target/movieplex.war --force'"
  }

  sleep 10L // wait for JBoss to update the status

  // Check for correct deployment
  timeout(time: 5, unit: 'MINUTES') {
    waitUntil {
      withCredentials([[$class: 'UsernamePasswordMultiBinding',
                        credentialsId: 'jboss-ec2',
                        passwordVariable: 'password',
                        usernameVariable: 'username']]) {
        sh "/opt/eap/bin/ --connect --user=${env.username} --password=${env.password} --command='ls /deployment=movieplex.war' > .jboss-status"

        // parse output
        def jbossStatus = readFile(".jboss-status")
        println "$jbossStatus"
        return jbossStatus.toLowerCase().contains("status=ok")
      }
    }
  }
}

  • With the CLI we call the deploy command with the produced war file as a parameter. The Jenkins credentials store is used to keep the credentials secure. The CLI uploads the file and deploys it to the server.
  • After the deployment finishes, we check that it succeeded.

This process can be made as complex as needed by leveraging any CLI command: introducing changes to the database, deploying additional artifacts, launching a Selenium script to check that the application is running, etc.
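For instance, a hedged sketch of one such follow-up step, reusing the same credentials binding as the example above; the datasource name and connection URL are made-up values:

```groovy
// Hypothetical follow-up: create a datasource with the same JBoss CLI
// after the deployment. All datasource parameters here are illustrative.
withCredentials([[$class: 'UsernamePasswordMultiBinding',
                  credentialsId: 'jboss-ec2',
                  passwordVariable: 'password',
                  usernameVariable: 'username']]) {
  sh "/opt/eap/bin/ --connect --user=${env.username} --password=${env.password} " +
     "--command='data-source add --name=MovieplexDS --jndi-name=java:/MovieplexDS " +
     "--driver-name=h2 --connection-url=jdbc:h2:mem:movieplex'"
}
```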

JBoss EAP 7 on OpenShift

In the case of OpenShift, the approach is slightly different. Instead of producing the artifact in Jenkins and uploading it to JBoss, we will start a build in OpenShift.

This triggers Source-to-Image (S2I), which pulls the source code, compiles it and creates a Docker image with JBoss and the newly produced application. This image is then pushed to OpenShift's internal Docker registry.

Calling OpenShift from Jenkins is easily done with the CloudBees OpenShift CLI plugin, which provides an automatic installer for the CLI, integration with Jenkins credentials for login, and a wrapper to execute CLI commands from a pipeline.

wrap([$class: 'OpenShiftBuildWrapper',
      url: '',
      credentialsId: 'openshift-credentials']) {

  // oc & source2image
  sh """
  oc project movieplex
  oc start-build j2ee-application-build --wait=true
  """
}

In the previous example, we log in to OpenShift, change to the desired project and launch the build.

After the build is finished, we have two possible approaches to deploy the application:

  • Enable image change triggers. An automatic deployment is then performed after each successful build, so we only execute a command to check that the deployment was correct.
//with build triggers enabled get the current deploy number
sh "oc deploy frontend > .openshift-deploy-number"


  • Disable image change triggers and control the deployment from Jenkins. This allows performing additional tasks before actually deploying, such as sending a notification email, requesting manual validation, waiting for the best moment, etc.


//with build triggers disabled request a deploy with the latest build
sh "oc deploy frontend --latest > .openshift-deploy-number"

After this, all that is left to do is check the status of the deployment:

def deployMessage = readFile(".openshift-deploy-number")
def deployNumber = deployMessage.substring(deployMessage.indexOf('#')).tokenize()[0]
echo "$deployMessage"
// Wait for OpenShift deployment
timeout(time: 5, unit: 'MINUTES') {
    waitUntil {
        sh "oc deploy frontend > .openshift-build-status"
        // parse `oc deploy` output
        def openshiftDeployStatus = readFile(".openshift-build-status")
        echo "Checking: '$deployNumber deployed'"
        def isDeployed = openshiftDeployStatus.indexOf("$deployNumber deployed")
        echo "$openshiftDeployStatus"
        echo "$isDeployed"
        return isDeployed > 0
    }
}


Some organizations may prefer not to use S2I. In those cases, another possibility is to produce a Docker image from Jenkins, deploy it to OpenShift and then promote that same image between environments.
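A hedged sketch of that alternative, using the Docker Pipeline steps; the registry address, credentials id and image name are assumptions:

```groovy
// Hypothetical non-S2I flow: build the image in Jenkins instead of OpenShift,
// then push it so the usual tag-based promotion can take over.
// 'registry.example.com' and 'openshift-registry-credentials' are illustrative.
def image = docker.build("movieplex/jboss-myapp-image:${env.BUILD_NUMBER}")
docker.withRegistry('https://registry.example.com', 'openshift-registry-credentials') {
    image.push()
    image.push('latest') // the tag the dev deployment config watches
}
```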

Promotion between environments

Once the application is built and deployed to Dev, it has to be promoted through the remaining environments after each validation stage.


In the case of JBoss EAP 7, we execute the exact same command for each environment, deploying the same war to each of them.
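A hedged sketch of that repetition; the environment names and per-environment credential ids are assumptions of ours:

```groovy
// Hypothetical promotion loop for standalone JBoss EAP 7: the same CLI
// deploy command, run once per environment behind a manual gate.
for (envName in ['test', 'qa', 'prod']) {
    stage("Promote to ${envName}") {
        input message: "Promote movieplex.war to ${envName}?"
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
                          credentialsId: "jboss-${envName}", // made-up credential ids
                          passwordVariable: 'password',
                          usernameVariable: 'username']]) {
            sh "/opt/eap/bin/ --connect --user=${env.username} --password=${env.password} --command='deploy target/movieplex.war --force'"
        }
    }
}
```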


For JBoss EAP 7 on OpenShift, the recommended approach is to promote the same image from one environment to the next by tagging the original image. Each environment points its deployment config to a tag (e.g. test, QA and production), with a trigger to deploy when a change is detected. Then, just by issuing these commands, the image is promoted from dev to test, QA and production respectively:

//promote to test
oc tag jboss-myapp-image:latest jboss-myapp-image:test

//promote to qa
oc tag jboss-myapp-image:test jboss-myapp-image:qa

//promote to production
oc tag jboss-myapp-image:qa jboss-myapp-image:prod
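Inside a pipeline, those tag commands would typically sit behind approval gates; a sketch, with stage names and input messages of our own invention:

```groovy
// Hypothetical promotion stages wrapping the oc tag commands above.
stage('Promote to QA') {
    input message: 'Promote the tested image to QA?'
    sh 'oc tag jboss-myapp-image:test jboss-myapp-image:qa'
}
stage('Promote to production') {
    input message: 'Promote the QA image to production?'
    sh 'oc tag jboss-myapp-image:qa jboss-myapp-image:prod'
}
```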

We hope these examples of Continuous Delivery with the CloudBees Jenkins Platform to Red Hat JBoss EAP 7 provided you with a base understanding to get you started.

For more information on our integrations with JBoss and other Red Hat Platforms, visit


About the author

Deon Ballard is a product marketing manager focusing on customer experience, adoption, and renewals for Red Hat Enterprise Linux. Red Hat Enterprise Linux is the foundation for open hybrid cloud. In previous roles at Red Hat, Ballard has been a technical writer, doc lead, and content strategist for technical documentation, specializing in security technologies such as NSS, LDAP, certificate management, and authentication / authorization, as well as cloud and management. She also wrote and edited the Middleware Blog for Red Hat and led portfolio solution marketing for integration and business automation.
