Last year, we made available an experimental alpha Ansible Content Collection of generated modules that use the AWS Cloud Control API to interact with AWS services. Although the Collection is not yet intended for production, we are continually improving and extending its functionality, with the goal of making it fully supported in the future.
In this blog post, we will go over what else has changed and highlight what’s new in the 0.3.0 release of this Ansible Content Collection.
Forward-looking Changes
Much of our work in release 0.3.0 focused on delivering several new enhancements, clarifying supportability policies, and extending the automation umbrella by generating new modules. Let's dive in!
New boto3/botocore Versioning
The amazon.cloud Collection has dropped support for botocore<1.28.0 and boto3<1.25.0. Most modules will continue to work with older versions of the AWS Software Development Kit (SDK); however, compatibility with older versions of the AWS SDK is not guaranteed and will not be tested.
New Ansible Support Policy
This Collection release drops support for ansible-core<2.11. In particular, Ansible Core 2.10 and Ansible 2.9 are not supported. For more information, visit the Ansible release documentation.
New Module Highlights
This release brings a set of newly supported modules. They provide exciting new capabilities that facilitate automation for various workloads and use cases.
Scaling your applications faster with the new AutoScaling modules
Perhaps you have a web application that experiences a surge in traffic at certain times of the day. Without autoscaling, you would need to manually add more EC2 instances to handle the extra traffic, which takes time and costs money. However, with amazon.cloud.autoscaling_launch_configuration, policies can be set to automatically add or remove EC2 instances based on specific criteria. An example is shown below:
- name: Create an AWS AutoScaling LaunchConfiguration
  amazon.cloud.autoscaling_launch_configuration:
    state: present
    image_id: my-ami-id
    instance_type: t2.micro
    key_name: my-keypair
    security_groups:
      - sg-1234567891abcdef
      - sg-abcdef1234567891
    user_data: |
      #!/bin/bash
      echo "Hello, world!" >> /var/log/myapp.log
    block_device_mappings:
      - device_name: /dev/sda1
        ebs:
          volume_size: 20
          volume_type: gp2
          delete_on_termination: true
Suppose you have an AutoScaling group that launches EC2 instances to handle requests for your web application. When a new instance is launched, you need to perform some custom actions, such as configuring the instance with the correct software and joining it to a load balancer. To perform these actions, you can configure a lifecycle hook with amazon.cloud.autoscaling_lifecycle_hook to trigger a notification when a new instance is launched. The notification can trigger an AWS Lambda function or an Amazon SNS topic that performs the custom actions. An example is shown below:
- name: Create an AWS AutoScaling LifecycleHook
  amazon.cloud.autoscaling_lifecycle_hook:
    auto_scaling_group_name: my-auto-scaling-group
    lifecycle_transition: autoscaling:EC2_INSTANCE_LAUNCHING
    heartbeat_timeout: 300
    lifecycle_hook_name: my-lifecycle-hook
    notification_target_arn: arn:aws:sns:us-east-1:123456789012:my-topic
    role_arn: arn:aws:iam::123456789012:role/my-role
Perhaps you have a web application that experiences a sudden surge in traffic during certain times of the day, and you need to quickly add more instances to handle the load. In addition, you want to minimize the time it takes for new instances to become available and start serving traffic. To achieve this, amazon.cloud.autoscaling_warm_pool allows you to configure a warm pool that adds pre-warmed instances to the group as soon as demand increases, rather than waiting for new instances to be launched and configured. An example is shown below:
- name: Create an AWS AutoScaling WarmPool
  amazon.cloud.autoscaling_warm_pool:
    auto_scaling_group_name: my-auto-scaling-group
    max_group_prepared_capacity: 2
    min_size: 1
    pool_state: Stopped
Simple deployment of your containers with the new Elastic Container Service (ECS) modules
Suppose you have a microservices-based application that runs in containers, and you need to deploy and manage the containers efficiently. You also need to scale the containers according to the application load to meet user demand. amazon.cloud.ecs_cluster helps you create a cluster and register compute capacity (EC2 instances or Fargate) with it. You can then deploy your containers onto the cluster and manage them using ECS. Here's an example:
- name: Create an ECS cluster
  amazon.cloud.ecs_cluster:
    cluster_name: my-ecs-cluster
Suppose you have a large-scale application that requires a lot of computational resources to handle variable demand. You want to be able to scale the computational capacity of your ECS cluster up or down based on demand, while also reducing costs when demand is low. To achieve this, you can use ECS capacity providers via amazon.cloud.ecs_capacity_provider. Capacity providers manage the capacity of the ECS cluster infrastructure. You can then associate a capacity provider with your ECS cluster using amazon.cloud.ecs_cluster_capacity_provider_association. Here's an example:
- name: Create an ECS CapacityProvider
  amazon.cloud.ecs_capacity_provider:
    name: my-capacity-provider
    auto_scaling_group_provider:
      auto_scaling_group_arn: arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:12345678-1234-1234-1234-123456789012:autoScalingGroupName/my-auto-scaling-group
- name: Create an ECS Cluster CapacityProvider Association
  amazon.cloud.ecs_cluster_capacity_provider_association:
    cluster: my-cluster
    capacity_providers:
      - my-capacity-provider
    default_capacity_provider_strategy:
      - capacity_provider: my-capacity-provider
        weight: 1
Ease your multi-cloud deployment with the new Elastic Container Registry (ECR) modules
Perhaps you have a multi-cloud deployment where you need to use the same container images on multiple cloud providers. You can easily create an ECR repository to store container images using amazon.cloud.ecr_repository. Here's an example:
- name: Create an ECR Repository
  amazon.cloud.ecr_repository:
    repository_name: my-web-app
    image_scanning_configuration:
      scan_on_push: true
Enhance your web application with the new WAFv2 modules
Suppose you have a web application that is prone to malicious attacks. To protect the web application, you want to deploy an AWS WAFv2 firewall in front of it to filter out malicious traffic. For this purpose, you can create an AWS WAFv2 web ACL and associate it with a load balancer or API gateway using amazon.cloud.wafv2_web_acl_association. An example is shown below:
- name: Create a WAFv2 WebACLAssociation
  amazon.cloud.wafv2_web_acl_association:
    resource_arn: arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-load-balancer/1234abcd
    web_acl_arn: arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
You might also want to block traffic from address ranges such as 192.0.2.0/24 and 203.0.113.0/24 using amazon.cloud.wafv2_ip_set.
- name: Create a WAFv2 IPSet
  amazon.cloud.wafv2_ip_set:
    name: MyIPSet
    description: A set of IP addresses to block
    ip_address_version: IPV4
    scope: REGIONAL
    addresses:
      - 192.0.2.0/24
      - 203.0.113.0/24
You can also monitor and analyze the traffic blocked or allowed by the web ACL using the amazon.cloud.wafv2_logging_configuration module. This module allows you to specify which AWS resource to send the logs to, as well as the format and fields to include in the logs.
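Below is a minimal sketch of such a logging configuration. Because this Collection's modules are generated from AWS Cloud Control API resource schemas, the parameter names here are assumed from the AWS::WAFv2::LoggingConfiguration schema, and both ARNs are placeholders; note that a CloudWatch Logs destination for WAF logs must be a log group whose name starts with aws-waf-logs-.

- name: Create a WAFv2 LoggingConfiguration
  amazon.cloud.wafv2_logging_configuration:
    # Web ACL whose traffic decisions should be logged (placeholder ARN)
    resource_arn: arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
    # Destination for the logs; WAF requires the log group name to start with aws-waf-logs-
    log_destination_configs:
      - arn:aws:logs:us-east-1:123456789012:log-group:aws-waf-logs-my-web-acl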
Additional protection can be implemented using amazon.cloud.wafv2_regex_pattern_set. An AWS WAFv2 RegexPatternSet defines a set of regular expressions that can be used to block matching traffic. An example is shown below:
- name: Create a WAFv2 RegexPatternSet
  amazon.cloud.wafv2_regex_pattern_set:
    name: my-regex-pattern-set
    description: A set of regex patterns to block
    scope: REGIONAL
    regular_expression_list:
      - '^[0-9]{5}$'
      - '^[A-Z]{2}-[0-9]{4}$'
Lighten up your log processing by using metric filters
One of the most common uses of Amazon CloudWatch is monitoring EC2 instances. Amazon CloudWatch logs can accumulate large amounts of data, so it is important to be able to filter the log data according to your needs. Filtering is achieved through metric filters, which can be set using amazon.cloud.logs_metric_filter as shown below.
- name: Create a Logs Metric Filter
  amazon.cloud.logs_metric_filter:
    filter_pattern: "[timestamp=*Z, request_id=\"*\", event]"
    log_group_name: /aws/lambda/my-function
    metric_transformations:
      - metric_name: Requests
        metric_namespace: my-namespace
        metric_value: "1"
Automate operational tasks across your AWS resources using the new SSM modules
Suppose you have an EC2 instance that requires specific software to be installed and configured before it can be used. amazon.cloud.ssm_document creates an SSM document that defines the steps to install and configure the software. Once the SSM document is created, it can be used to automate the installation and configuration process across multiple EC2 instances. For this purpose, you can create an SSM Run Command, which executes the SSM document on the targeted EC2 instances. An example is shown below:
- name: Create an SSM Document
  amazon.cloud.ssm_document:
    content:
      schemaVersion: "2.2"
      description: "My SSM Document"
      mainSteps:
        - action: "aws:runShellScript"
          name: "myStep"
          inputs:
            runCommand:
              - "echo 'Hello World'"
    name: "my-ssm-document"
    document_type: "Command"
Perhaps you also need to plan for disaster recovery of your resources to maintain business continuity and minimize downtime. amazon.cloud.ssm_resource_data_sync can be used to back up resource data so that it can be restored in case of a disaster or outage.
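A minimal sketch is shown below; the parameter names are assumed from the AWS::SSM::ResourceDataSync schema from which the module is generated, and the bucket name and Region are placeholders for a bucket that must already exist:

- name: Create an SSM ResourceDataSync
  amazon.cloud.ssm_resource_data_sync:
    sync_name: my-resource-data-sync
    # Inventory data is written as JSON to the (pre-existing) S3 bucket below
    sync_format: JsonSerDe
    bucket_name: my-sync-bucket
    bucket_region: us-east-1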
Improve your applications' reliability and availability with the new EC2 placement group module
Perhaps you need to set up a workload that requires low-latency, high-bandwidth communication between instances. A cluster placement group can help you do this by placing instances close together in a single cluster within an availability zone to provide high-bandwidth, low-latency network performance. An example is shown below:
- name: Create an EC2 PlacementGroup
  amazon.cloud.ec2_placement_group:
    strategy: cluster
Simplify access management for AWS services with the new IAM instance profile module
Suppose you have an EC2 instance that needs to access an S3 bucket to upload and download files. You can create an IAM role with permissions to access the S3 bucket and attach it to an IAM instance profile created using amazon.cloud.iam_instance_profile. Then you can launch the EC2 instance with the instance profile associated with it. An example is shown below:
- name: Create an IAM InstanceProfile
  amazon.cloud.iam_instance_profile:
    path: /
    roles:
      - my-role
Manage your data with ease using the new Amazon RDS modules
This release also brings a number of new RDS modules:
- amazon.cloud.rds_db_instance - Creates and manages an Amazon RDS DB instance.
- amazon.cloud.rds_db_subnet_group - Creates and manages a database subnet group.
- amazon.cloud.rds_global_cluster - Creates and manages an Amazon Aurora global database spread across multiple AWS Regions.
- amazon.cloud.rds_option_group - Creates and manages an RDS option group.
- amazon.cloud.rds_db_cluster_parameter_group - Creates and manages an RDS DB cluster parameter group.
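As a first taste, here is a minimal, hypothetical sketch of creating a DB instance; the parameter names are assumed from the AWS::RDS::DBInstance schema from which the module is generated, and the identifier, instance class, and credentials are placeholders:

- name: Create an RDS DBInstance
  amazon.cloud.rds_db_instance:
    db_instance_identifier: my-db-instance
    db_instance_class: db.t3.micro
    engine: mysql
    # AllocatedStorage is typed as a string in the resource schema
    allocated_storage: "20"
    master_username: admin
    # Keep secrets out of playbooks; pull the password from a variable or vault
    master_user_password: "{{ db_password }}"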
One of our upcoming blog posts will be dedicated to RDS and will cover some detailed use case scenarios. Stay tuned!
Where to go next
We hope you found this blog helpful! But, more importantly, we hope it inspired you to try out the latest amazon.cloud Collection release and let us know what you think. Please stop by the Ansible AWS IRC channel #ansible-aws on Libera.Chat to provide your valuable feedback or receive assistance with the amazon.cloud Collection.
- Come visit us at AnsibleFest, now a part of Red Hat Summit 2023.
- Missed out on AnsibleFest 2022? Check out the Best of AnsibleFest 2022.
- Self-paced lab exercises - We have interactive, in-browser exercises to help you get started with Ansible Automation Platform.
- Try Ansible Automation Platform free for 60 days.