In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you've not read it yet!).
Next, in Part 2 we demonstrate how to use that dynamic inventory with included, pre-written Ansible validation playbooks from the command line.
Time to Validate!
The openstack-tripleo-validations RPM provides all the validations. You can find them in /usr/share/openstack-tripleo-validations/validations/ on the director host. Here's a quick look, but check them out on your deployment as well.
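For example, listing that directory shows the full set of playbooks shipped with your release (the names below are only a representative sample and will vary between releases):

$ ls /usr/share/openstack-tripleo-validations/validations/
ceilometerdb-size.yaml   haproxy.yaml            ntp.yaml
rabbitmq-limits.yaml     undercloud-cpu.yaml     undercloud-disk-space.yaml
undercloud-ram.yaml      ...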
With Red Hat OpenStack Platform we ship over 20 playbooks to try out, and there are many more upstream. Check the community often as the list of validations is always changing. Unsupported validations can be downloaded and included in the validations directory as required.
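As a sketch, adding an upstream validation is simply a matter of copying its playbook into place. Here we assume git access to the upstream tripleo-validations repository, with <validation-name> as a placeholder for the playbook you want:

$ git clone https://opendev.org/openstack/tripleo-validations.git
$ sudo cp tripleo-validations/validations/<validation-name>.yaml /usr/share/openstack-tripleo-validations/validations/

Remember that validations pulled from upstream in this way are not supported by Red Hat.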
A good first validation to try is the ceilometerdb-size validation. This playbook ensures that the ceilometer configuration on the Undercloud doesn’t allow data to be retained indefinitely. It checks the metering_time_to_live and event_time_to_live parameters in /etc/ceilometer/ceilometer.conf to see if they are either unset or set to a negative value (representing infinite retention). Retaining this data indefinitely can lead to decreased performance on the director node and degrade third-party tools that rely on this data.
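For reference, both parameters live in the [database] section of /etc/ceilometer/ceilometer.conf. A correctly configured file looks something like this (259200 seconds, i.e. three days, is only an illustrative value):

[database]
# retention in seconds; unset or a negative value means keep data forever
metering_time_to_live = 259200
event_time_to_live = 259200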
Now, let’s run this validation using the command line in an environment where we have one of the values it checks set correctly and the other incorrectly. For example:
[stack@undercloud ansible]$ sudo awk '/^metering_time_to_live|^event_time_to_live/' /etc/ceilometer/ceilometer.conf
metering_time_to_live = -1
event_time_to_live=259200
Method 1: ansible-playbook
The easiest way is to run the validation using the standard ansible-playbook command:
$ ansible-playbook /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml
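On a test undercloud the output looks roughly like the following (trimmed and paraphrased here; the host alias comes from your inventory and the exact formatting depends on your Ansible version — on a real terminal the “ok” lines are green, the failure is red, and the skip is blue):

PLAY [undercloud] *************************************************************

TASK [setup] ******************************************************************
ok: [undercloud]

TASK [Get TTL setting values from ceilometer.conf] ****************************
ok: [undercloud] => (item=metering_time_to_live)
ok: [undercloud] => (item=event_time_to_live)

TASK [Check values] ***********************************************************
failed: [undercloud] (item=...) => {"msg": "Value of metering_time_to_live is set to -1."}
skipping: [undercloud] => (item=...)

PLAY RECAP ********************************************************************
undercloud   : ok=2    changed=0    unreachable=0    failed=1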
So, what happened?
Ansible output is colored to help read it more easily. The green “OK” lines for the “setup” and “Get TTL setting values from ceilometer.conf” tasks represent Ansible successfully finding the metering and event values, as per this task:
- name: Get TTL setting values from ceilometer.conf
  become: true
  ini: path=/etc/ceilometer/ceilometer.conf section=database key={{ item }} ignore_missing_file=True
  register: config_result
  with_items:
    - "{{ metering_ttl_check }}"
    - "{{ event_ttl_check }}"
And the red and blue outputs come from this task:
- name: Check values
  fail: msg="Value of {{ item.item }} is set to {{ item.value or "-1" }}."
  when: item.value|int < 0 or item.value == None
  with_items: "{{ config_result.results }}"
Here, Ansible issues a failed result (the red output) when the “Check values” task's conditional is met, that is, when the value is less than 0 or does not exist. In our case, metering_time_to_live is set to -1, so the condition is met, the task runs, and the only possible outcome is a failure.
With the blue output, Ansible is telling us it skipped the task, and in this case a skip represents a good result. Consider that event_time_to_live is set to 259200. This value does not match the conditional in the task (item.value|int < 0 or item.value == None). Since the task only runs when the conditional is met, and its only possible output is a failed result, Ansible skips it. So, a skip means we have passed for this value.
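To clear the failure, give metering_time_to_live a finite value and re-run the playbook. A minimal sketch, assuming the crudini utility is available on the director host (any text editor works just as well), and reusing 259200 seconds to match event_time_to_live:

$ sudo crudini --set /etc/ceilometer/ceilometer.conf database metering_time_to_live 259200   # set retention to 3 days
$ ansible-playbook /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml

With both values finite, the “Check values” task should now skip both items and the validation passes. Note that the running ceilometer services may need a restart to pick up the new value.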
For even more detail, you can run ansible-playbook in verbose mode by adding -vvv to the command:
$ ansible-playbook -vvv /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml
A substantial amount of information is returned, and it is well worth the time to review. Give it a try in your own environment. You may also want to learn more about Ansible playbooks by reviewing the full documentation.
Now that you’ve run your first validation, you can see how powerful they are. But the CLI is not the only way to run the validations.
In the final part of the series we introduce validations with both the OpenStack workflow service, Mistral, and the director web UI. Check back soon!
The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.