Event-Driven Ansible, part of Red Hat Ansible Automation Platform, provides automated responses to changing IT conditions. It can listen for and process third-party alerts and triggers, then take automation actions using Ansible Rulebooks. Event-Driven Ansible can be adapted to diverse needs to minimize human error and address configuration drift. IT operations tasks often handled by Event-Driven Ansible include creating or updating an incident ticket, executing a remediation, and managing users in an organization. In all cases, the goal is to bring consistency by providing building blocks for executing repetitive tasks.

Event-Driven Ansible content collections offer event source plugin implementations for common third-party event sources, as well as Ansible Rulebooks to get started quickly with automation. These are available on Ansible automation hub and can be adapted to your own IT operations needs.

To this end, Red Hat provides a certified content collection for Red Hat Insights. The collection uses an incoming webhook that listens for events triggered by the Red Hat Hybrid Cloud Console, and provides a base for integrating Insights events into your automation projects. Events are triggered when configuration improvements are detected for your Red Hat Enterprise Linux (RHEL) systems. For example, Insights reports and offers remediations for performance, availability, stability and security, vulnerability response, malware detection, and compliance requirements. An end-to-end example of using the collection with Ansible Automation Platform to automate the creation of a ServiceNow incident ticket on malware detection is documented in a previous blog post: Red Hat Insights Collection for Event-Driven Ansible.

In this article, we approach the integration from a different perspective. Rather than listening for Insights events from the collection, we demonstrate how one can perform queries against Red Hat Insights APIs (Application Programming Interfaces) to retrieve previously triggered events. We provide sample code that can be adapted based on your specific requirements. This alternative solution offers additional flexibility for the integration in organizations with tight network security and firewall rules. Please keep in mind that the provided code is experimental and not supported by Red Hat.

Red Hat Insights certified collection for Event-Driven Ansible

Red Hat Insights provides one of the Red Hat Ansible Certified Content Collections for Event-Driven Ansible content, included in Ansible automation hub for subscribers. The collection contains an event source plugin for receiving events out of the Red Hat Hybrid Cloud Console. Looking at the associated open source code repository for the collection, the plugin exposes a secure TCP socket URL acting as a listener for incoming HTTP POST requests. Each request is expected to contain a JavaScript Object Notation (JSON) formatted body that is validated and processed. This method, also called webhook, is often used to connect event sources to automation solutions to drive event-driven automation.
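As an illustrative sketch of what such a listener does with each request, the body-validation step might look like the following. This is not the certified collection's actual code; the function name and error handling are hypothetical.

```python
import json

# Illustrative sketch of the validation a webhook listener applies to
# each incoming HTTP POST body before queuing it as an event.
def parse_webhook_body(body: bytes) -> dict:
    """Decode and validate a JSON object from a raw request body."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError as exc:
        raise ValueError(f"body is not valid JSON: {exc}") from exc
    if not isinstance(payload, dict):
        raise ValueError("expected a JSON object at the top level")
    return payload

print(parse_webhook_body(b'{"application": "malware-detection"}'))
# {'application': 'malware-detection'}
```

Requests that fail validation are rejected before they ever reach a rulebook, which keeps malformed or non-JSON traffic out of the automation pipeline.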

Webhooks are often referred to as reverse APIs or push APIs. They are a common way to facilitate the integration of SaaS (Software-as-a-Service) products and the sharing of data between applications. One of the benefits of webhooks is that they eliminate the need to repeatedly query other applications' endpoints for data. When an update is available, the source application can trigger an event and push it in real time to the webhook listener. From an efficiency standpoint, webhooks consume less bandwidth and reduce wasted resources and computing power.

From an integration perspective, using the Red Hat Insights certified collection and its webhook implementation is the most efficient and recommended way to proceed. There are, however, cases in which such an integration cannot be used, as it may require firewall configuration changes that are not always easy to obtain.

Possible challenge with webhooks and enterprise firewalls

In a recent conversation with a customer, we worked on using the Red Hat Insights collection to integrate with an Ansible Automation Platform deployment hosted on the organization's local network. The environment this organization operates in is very secure: any change to the networking configuration must go through a strict policy and control process. In such an environment, using a webhook integration with push HTTP POST requests requires opening a port for incoming traffic from outside the organization.

The information required to request the appropriate network additions for configuring firewall rules is fully documented in our Red Hat Customer Portal knowledge base article. With this in hand, network administrators can securely configure firewall rules allowing incoming traffic from Red Hat Hybrid Cloud Console servers for the purpose of this integration.

To build their case for any necessary approvals, the customer's team wanted to trial the integration as a proof-of-concept with a couple of services before rolling it out more broadly. Their network restrictions blocked incoming messages, and they were not in a position to get firewall rules in place for a proof-of-concept.

In such a scenario, we looked at alternative approaches and investigated the possibility of querying Insights APIs from the Event-Driven Ansible source plugin to pull a stream of new events. We hoped that the success of the proof-of-concept would provide convincing business value arguments to get an approval from the network security team.

An alternative Event-Driven Ansible approach pulling data from Insights APIs

In the rest of this article, we explore the creation and use of a new event source plugin for Event-Driven Ansible based on pulling data from APIs rather than exposing an endpoint for sources to push to. This approach is well documented in a previous article in which an event source plugin is implemented by querying ServiceNow APIs for new records. We follow the same method to implement our custom event source plugin.

Red Hat Insights exposes a set of APIs that can be queried to retrieve data. The API documentation and a cheat-sheet are available to get started with a custom client implementation. We are interested in the notifications APIs which provide an endpoint for retrieving the full history of previously triggered events: /notifications/events

The endpoint provides fields that can be used to sort, filter, or order the results, as well as specific fields to include additional data in the response. Among them, includeDetails and includePayload, when set to true, allow us to get the full content of the events, including the JSON formatted payload that was sent. Finally, the startDate and endDate fields let us specify a date range for our request.
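Putting these fields together, a request to the endpoint might carry query parameters like the following. The helper function is a hypothetical sketch; the field names come from the API description above, but the date format and boolean encoding are assumptions to verify against the Insights API documentation.

```python
from datetime import date

# Hypothetical helper that assembles query parameters for the
# /notifications/events endpoint from the fields described above.
def build_events_query(start: date, end: date,
                       include_details: bool = True,
                       include_payload: bool = True) -> dict:
    """Return a date-bounded query requesting full event content."""
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "includeDetails": str(include_details).lower(),
        "includePayload": str(include_payload).lower(),
    }

print(build_events_query(date(2024, 5, 1), date(2024, 5, 1)))
```

Setting start and end to today's date yields exactly the "events received today" query the plugin needs.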

We use this endpoint in our custom event source plugin to get the list and content of all Insights events received today. We can then forward these events to automation for processing. The implementation of the custom event source plugin using the /notifications/events API is available on the GitHub code repository.

One point to highlight at the time of writing is the use of service account token-based authentication, replacing basic authentication, for querying Insights APIs. This security enhancement was recently implemented for the Hybrid Cloud Console. Service accounts are integrated with the user access functionality to provide granular control over access permissions. Additional information and instructions for transitioning from basic authentication to token-based authentication via service accounts are documented in this knowledge base article.
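In practice, token-based authentication means exchanging the service account's client ID and secret for a short-lived access token before each API session. The sketch below shows a standard OAuth2 client-credentials exchange; the token URL is an assumption to confirm against the knowledge base article referenced above.

```python
import json
import urllib.parse
import urllib.request

# Assumed Red Hat SSO token endpoint; verify against the knowledge
# base article before relying on it.
TOKEN_URL = ("https://sso.redhat.com/auth/realms/redhat-external/"
             "protocol/openid-connect/token")

def token_request_body(client_id: str, client_secret: str) -> bytes:
    """Form-encoded body for the OAuth2 client_credentials grant."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()

def fetch_access_token(client_id: str, client_secret: str) -> str:
    """POST the credentials and return the bearer token to use
    in an Authorization header on subsequent API calls."""
    req = urllib.request.Request(
        TOKEN_URL,
        data=token_request_body(client_id, client_secret),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["access_token"]
```

The returned token is then sent as `Authorization: Bearer <token>` when calling /notifications/events.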

The rest of the article describes how one can use the collection with Hybrid Cloud Console to retrieve the latest triggered events. Note that this alternative custom implementation is not supported by Red Hat. It should only be used as an example to build your own collection when trying to prototype integrations. Once the proof-of-concept is successful, one should opt for using the Red Hat Insights certified content collection, which is supported.

Configure a service account in Hybrid Cloud Console

We create a dedicated service account for this integration and limit its user access scope to reading notification events. Note that you need to be an Organization Administrator on the account to create a new service account and perform user access configuration.

Navigate to ‘Settings > Identity and Access Management > Service Accounts’ in the Hybrid Cloud Console. Click on ‘Create new service account’ and provide a name and a description. Once submitted, the Hybrid Cloud Console generates credentials for this service account that can be used to query its APIs. Ensure you copy the Client ID and Client secret provided, as there is no way to retrieve the secret later. These credentials are required for generating a token that is used to query Insights APIs.

Unlike regular users, service accounts do not inherit default permissions (e.g. from the Default access group). You must grant access to the service account by associating it with a user group that has the required permissions defined through a role association.

For the purposes of this integration, we create a new role by navigating to ‘Settings > User Access > Roles’ and clicking ‘Create role’. We give the role a name and description, and look up and select the notifications:events:read permission in the next screen. Clicking Next and Submit creates the new role.

Next, we create a new user group by navigating to ‘Settings > User Access > Groups’ and clicking ‘Create group’. We give this group a name and description, and in the next step select the newly created role to add to the group. The ‘Add members’ page that follows can be left untouched, as the service account is associated from a different location. Clicking Next and Submit creates the new group.

At this point, we can review the newly created group by clicking on its name. Navigating to the ‘Service accounts’ tab and clicking ‘Add service account’ allows us to look up and associate our new service account with the group. From now on, the service account can be used to query the Notifications APIs with read permission on events. Next, we validate that the credentials work by passing them to the collection.

Validating the collection using Python

The eda-insights-pulling collection includes a README.md file with all requirements to run the example source plugin. The new_events.py source plugin accepts three optional arguments (the Hybrid Cloud Console server URL, an HTTP proxy URL, and the authentication server URL that provides access tokens) and two mandatory arguments: the client ID and client secret needed to generate an access token for your service account.

The source plugin can be executed from the command line, which allows testing both the authentication and the Insights API query. First, we set some environment variables, replacing <client id> and <client secret> with the credentials obtained when setting up your service account in the Hybrid Cloud Console:

export HCC_CLIENT_ID=<client id>
export HCC_CLIENT_SECRET=<client secret>
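Inside a plugin like this, reading those variables with a clear failure message might look as follows. This is a hypothetical sketch; the variable names match the exports above, but the actual new_events.py may handle its arguments differently.

```python
import os

# Hypothetical sketch of how the plugin could read its credentials
# from the environment, failing fast with an explicit message when
# one of them is missing.
def read_credentials() -> tuple:
    try:
        return os.environ["HCC_CLIENT_ID"], os.environ["HCC_CLIENT_SECRET"]
    except KeyError as exc:
        raise SystemExit(f"missing required environment variable: {exc}")
```

Failing fast here avoids the less obvious 401 error that would otherwise surface later, when the token request is rejected.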

We are now ready to try our source plugin by running the following command:

python new_events.py

In case of failure, the script exits with an error code. A 401 Unauthorized error is likely caused by incorrect credentials being passed when requesting an access token for your service account. Ensure that <client id> and <client secret> are correctly passed to the script. If the error persists, you probably need to regenerate the credentials from the service account page, as it is impossible to retrieve the secret after the initial creation. A 403 Forbidden error is likely caused by insufficient permissions for your service account. Check that the service account is associated with a group that has a role with the notifications:events:read permission.
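These two failure modes can be turned into actionable messages in a small helper like the one below (names and wording are illustrative, not part of the collection):

```python
# Maps the HTTP status codes discussed above to troubleshooting hints.
def auth_error_hint(status: int) -> str:
    hints = {
        401: ("Unauthorized: check that the client ID and client secret "
              "are correct, or regenerate the service account credentials."),
        403: ("Forbidden: check that the service account belongs to a group "
              "whose role grants notifications:events:read."),
    }
    return hints.get(status, f"unexpected HTTP status {status}")

print(auth_error_hint(403))
```

Surfacing the hint alongside the raw status code makes the proof-of-concept much easier to debug for operators who did not set up the service account themselves.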

If successful, the script queries the Notifications API for new events every 60 seconds and displays their payload on the terminal. This validates that the source plugin works and can be used as part of Event-Driven Ansible.
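The polling loop at the heart of such a source plugin can be sketched as follows. In the sketch, `fetch_events` stands in for the Insights API query, and the queue plays the role of the one the Event-Driven Ansible runtime hands to a source plugin's `main(queue, args)` entry point; the de-duplication by event ID is an assumption about how repeats would be filtered.

```python
import asyncio

# Minimal polling-loop sketch: query for events on an interval and
# forward only events not seen before onto the queue.
async def poll_events(queue, fetch_events, interval=60, max_cycles=None):
    seen, cycle = set(), 0
    while max_cycles is None or cycle < max_cycles:
        for event in await fetch_events():
            if event["id"] not in seen:  # suppress already-forwarded events
                seen.add(event["id"])
                await queue.put(event)
        cycle += 1
        if max_cycles is None or cycle < max_cycles:
            await asyncio.sleep(interval)

async def demo():
    queue = asyncio.Queue()
    async def fake_fetch():  # stand-in for the real API call
        return [{"id": 1, "payload": {"application": "policies"}}]
    # two cycles with no delay: the second occurrence is suppressed
    await poll_events(queue, fake_fetch, interval=0, max_cycles=2)
    return queue.qsize()

print(asyncio.run(demo()))  # 1
```

The `max_cycles` parameter only exists to make the sketch testable; a real plugin loops until the runtime cancels it.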

Validating the collection using Ansible

With the source plugin successfully retrieving new events, we can now use it as part of an Event-Driven Ansible rulebook and run automation for each event received.

Our collection contains a new_events_rulebook.yml rulebook that defines new_events as the source of events and all required parameters (e.g. HCC_CLIENT_ID and HCC_CLIENT_SECRET).

A simple playbook playbooks/new_event.yml is called for processing each event. In our example, the playbook extracts a few fields from the JSON formatted payload (event.bundle, event.application, event.event_type, and event.events) and displays them on the output.
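To make the field extraction concrete, here is the same operation on a trimmed, hypothetical payload; real Insights events carry many more fields than shown.

```python
import json

# Trimmed, hypothetical Insights event payload containing only the
# fields the sample playbook reads.
raw = json.dumps({
    "bundle": "rhel",
    "application": "malware-detection",
    "event_type": "detected-malware",
    "events": [{"payload": {"matched_rules": ["example-rule"]}}],
})

event = json.loads(raw)
# Pick out the top-level identification fields, as the playbook does.
summary = {key: event[key] for key in ("bundle", "application", "event_type")}
print(summary)
print(len(event["events"]))  # 1
```

The same dotted paths (event.bundle, event.application, and so on) are what the rulebook condition and playbook reference on the Ansible side.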

We use the ansible-rulebook command (the CLI component of Event-Driven Ansible) and pass the environment variables defined earlier. The -S parameter indicates where to find the source plugin. The --print-events parameter displays the details of each triggering event on the terminal, which is useful for troubleshooting as the event is shown before the playbook output.

ansible-rulebook --rulebook new_events_rulebook.yml \
       -i inventory.yml \
       -S . \
       --print-events

By running this command, we instruct Event-Driven Ansible to run the source plugin to retrieve new events every 60 seconds, and forward each new event payload to the playbooks/new_event.yml playbook. The result is similar to the Python script, except that it is now encapsulated in the Event-Driven Ansible output, including the display of the event and the playbook output once processed.

With this flow in place, one can now pull events from Insights APIs in an Event-Driven Ansible implementation as an alternative to using a listener waiting for push events. The payload content and processing of events remain the same. An end-to-end example for creating a ServiceNow incident ticket based on the payload of a received event is documented in a previous blog post. Further, the blog documents the steps required to define a project and expose a decision environment in an Event-Driven Ansible controller. These steps can be replicated and adapted with this alternative collection for further automation.


In this article, we present an alternative approach to integrating Red Hat Insights with Ansible Automation Platform’s Event-Driven Ansible. We highlight the limitation of the existing certified collection in a real-world scenario, and offer a different implementation that can be used for demonstrating capabilities and benefits of the integration. This alternative solution brings more flexibility in organizations with network security constraints and allows for setting up a proof-of-concept in support of a business proposal for adopting your integration initiative. Please keep in mind that the provided code is experimental and not supported by Red Hat.

Although the described issue relates to restrictive firewall rules for internal networks, similar problems can exist with hosted SaaS offerings. A similar approach, pulling events from the API rather than having them pushed, can be followed in custom implementations. Finally, one can think of modifying the solution to query different endpoints from the Red Hat Insights and Hybrid Cloud Console APIs. This is especially useful if your organization is looking at retrieving data that is not yet exposed as an Insights event. The custom collection can query the endpoint for data and process each entry using Ansible automation according to your own requirements.

We hope that this article and solution provide food for thought for your integration initiatives, and offer a base from which to build your implementation. The original idea for this alternative approach comes from a direct customer interaction describing a concrete constraint. We are always looking for feedback and discussions on the barriers that you are experiencing adopting Red Hat solutions. The easiest way to get in touch is to use the ‘Feedback’ button on the Hybrid Cloud Console to reach out with any comments. We look forward to discussing your ideas and collaborating with you.

About the author

Jerome Marc is a Red Hat Sr. Principal Product Manager with over 15 years of international experience in the software industry spanning product management and product marketing, software lifecycle management, enterprise-level application design and delivery, and solution sales.
