As cloud computing and automation with Amazon Web Services (AWS) infrastructure continue to evolve, version 9.0.0 of the Red Hat Ansible Certified Content Collection for amazon.aws brings a range of updates. These updates are designed to streamline user workflows and speed up the shift from development to production environments.
In this blog post, you’ll learn about the key features introduced in the Red Hat Ansible Certified Content Collection for amazon.aws 9.0.0.
New feature highlights
This release brings several new features. Let’s take a look!
amazon.aws.cloudwatchlogs_log_group_metric_filter (collection module)
The `amazon.aws.cloudwatchlogs_log_group_metric_filter` module now supports setting `unit` and `dimensions`. The `unit` parameter specifies the measurement unit for the metric values, such as milliseconds, seconds, bytes and count. Setting the unit correctly enables Amazon CloudWatch to interpret and display the metric data accurately. The `dimensions` parameter allows you to assign additional context or labels to metrics, making it possible to categorize, filter and analyze the metrics based on these labels.
Use case scenario: Monitoring multitenant API latency with precise units and dimensions
You host a multitenant API-driven application on AWS and rely on CloudWatch to monitor various performance metrics. Low latency for API responses is crucial to maintaining SLAs, especially for key tenants. You want to track API request latency precisely in milliseconds and categorize the metrics by TenantID and Environment (production or staging) for deeper insights and quicker troubleshooting. Using automation to deploy these configuration updates enables a consistent and scalable solution.
To achieve this CloudWatch configuration, use the new `unit` option in the `amazon.aws.cloudwatchlogs_log_group_metric_filter` module to record latency in milliseconds, and use `dimensions` to categorize metrics by TenantID and Environment. This scenario demonstrates how combining `unit` and `dimensions` in `amazon.aws.cloudwatchlogs_log_group_metric_filter` enables you to monitor API latency in a highly detailed and cost-effective way, tailored for a multitenant setup.
- name: Create metric filter for API latency by tenant with specific units and dimensions
  amazon.aws.cloudwatchlogs_log_group_metric_filter:
    log_group_name: "{{ log_group_name }}"
    filter_name: "{{ filter_name }}"
    filter_pattern: '{ $.event_type = "API Request" && $.latency = * }'
    metric_transformations:
      - metric_name: "TenantAPILatency"
        metric_namespace: "SaaSApp/Performance"
        metric_value: "$.latency"
        unit: "Milliseconds"
        dimensions:
          TenantID: "$.tenant_id"
          Environment: "$.env"
    state: present
amazon.aws.rds_instance (collection module)
The `amazon.aws.rds_instance` module now includes a new option, `multi_tenant: true`, enabling support for multitenant container databases (CDBs). This feature allows you to manage Amazon Relational Database Service (RDS) instances more efficiently by using a shared CDB to support multiple tenants within a single database instance. With this setup, you can optimize resource usage, reduce operational overhead and maintain data isolation for each tenant, all within the same physical infrastructure. This new capability is particularly beneficial for applications requiring secure, scalable multitenancy.
Use case scenario: Multitenant SaaS application with multitenant CDB support
You are managing a Software-as-a-Service (SaaS) application that serves multiple clients, each with different data storage needs. The architecture uses a multitenant database design in Amazon RDS where multiple tenants share the same CDB, but their data is stored in separate pluggable databases (PDBs) for isolation and security.
To automate the enablement of multitenant CDB support for efficient multitenant management, set `multi_tenant: true` in the `amazon.aws.rds_instance` module configuration. Here's an example of how to provision a multitenant RDS instance with CDB support:
- name: Provision Multi-Tenant RDS instance with CDB support
  amazon.aws.rds_instance:
    db_instance_identifier: "{{ db_instance_name }}"
    db_instance_class: "db.m6g.large"
    allocated_storage: 100
    engine: "oracle-se2-cdb"
    multi_tenant: true
    storage_type: "gp2"
    vpc_security_group_ids:
      - "{{ sg_group_id }}"
    state: "present"
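After the playbook runs, you can verify the new instance with the `amazon.aws.rds_instance_info` module. The following is a minimal sketch that reuses the `db_instance_name` variable from the example above; the exact fields returned depend on the database engine and the module version you have installed.

- name: Gather details about the provisioned RDS instance
  amazon.aws.rds_instance_info:
    db_instance_identifier: "{{ db_instance_name }}"
  register: rds_details

- name: Display the engine and instance class of the new instance
  ansible.builtin.debug:
    msg: "Engine: {{ rds_details.instances[0].engine }}, class: {{ rds_details.instances[0].db_instance_class }}"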
amazon.aws.ec2_instance (collection module)
Previously, resizing Amazon EC2 instances (upgrading or downgrading instance types) required manually stopping the instances, modifying the instance type and carefully tracking each instance's state to ensure it returned to the desired status. This multistep process was both time-consuming and error-prone.
With the updated `amazon.aws.ec2_instance` module, you can now automate the entire process: stopping an EC2 instance, modifying its type and validating that it reaches the specified state (running, stopped) in a single streamlined operation. This new feature simplifies EC2 instance management, providing greater flexibility to optimize both performance and costs by adjusting instance types as needed.
Use case scenario: Managing EC2 instance types for cost optimization and performance scaling
You manage dynamic infrastructure on AWS, hosting multiple EC2 instances to support a web application. As the application grows over time, resource demands fluctuate—at times requiring more CPU and memory to handle increased load and at other times, during periods of low demand, needing fewer resources to reduce costs.
As the web application experiences a sudden increase in traffic (during peak usage hours), you want to upgrade your EC2 instances to a larger type to accommodate the higher resource demands. Using the `amazon.aws.ec2_instance` module, you can automate this process, ensuring that the instances are stopped, modified to the new instance type and started again with minimal downtime.
- name: Upgrade EC2 instance type for peak usage
  amazon.aws.ec2_instance:
    name: "{{ ec2_instance_name }}"
    image_id: "{{ ec2_ami_id }}"
    instance_type: "t3.large" # Upgrade instance type to a larger size
    state: "present"
    subnet_id: "{{ subnet.id }}"
    wait: true
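When traffic returns to normal, the same pattern can be used to move back to a smaller instance type and keep costs down. The task below is a minimal sketch: `t3.medium` is only an illustrative size, and the module stops the instance, applies the new type and brings it back to the requested state, as described above.

- name: Downgrade EC2 instance type during off-peak hours
  amazon.aws.ec2_instance:
    name: "{{ ec2_instance_name }}"
    image_id: "{{ ec2_ami_id }}"
    instance_type: "t3.medium" # Illustrative smaller size for low-demand periods
    state: "running"           # Return the instance to a running state after the resize
    subnet_id: "{{ subnet.id }}"
    wait: true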
amazon.aws.ec2_vpc_route_table (collection module)
The `amazon.aws.ec2_vpc_route_table` module now supports setting the `transit_gateway_id` in the `routes` parameter. With this enhancement, you can route traffic through an AWS Transit Gateway (TGW) by specifying the `transit_gateway_id` as a target for your routes. This feature simplifies the management of inter-VPC communication and connectivity to on-premises resources, making it easier to centralize and automate routing configurations within your network architecture.
Use case scenario: Centralized network routing with AWS Transit Gateway
You manage a large-scale, multiregion architecture on AWS, where different Virtual Private Clouds (VPCs) are deployed across several regions to handle different workloads. These VPCs need to communicate with each other as well as with on-premises resources. To simplify this architecture and avoid complex peering relationships, you implement an AWS Transit Gateway as the central hub for inter-VPC routing.
Once the Transit Gateway is set up and the VPCs are attached, you can use the `amazon.aws.ec2_vpc_route_table` module to add routes to the VPCs' route tables that direct traffic to the Transit Gateway. This step ensures that each VPC knows how to route traffic to other VPCs or on-premises networks through the Transit Gateway.
- name: Create Transit Gateway
  amazon.aws.ec2_transit_gateway:
    name: "Central-TGW"
    description: "Centralized Transit Gateway for all VPCs"
    asn: 64512
    state: "present"
  register: transit_gateway

- name: Attach VPC to Transit Gateway
  amazon.aws.ec2_transit_gateway_vpc_attachment:
    name: "{{ vpc_attachment_name }}"
    transit_gateway_id: "{{ transit_gateway.transit_gateway.transit_gateway_id }}"
    vpc_id: "{{ vpc_id }}"
    subnet_ids: "{{ subnet_ids }}"
    purge_subnets: False
    state: "present"

- name: Add a route to public route table
  amazon.aws.ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc.id }}"
    tags:
      Public: "true"
      Name: Public route table
    routes:
      - dest: "0.0.0.0/0"
        gateway_id: igw
      - dest: "::/0"
        gateway_id: igw
      - dest: "10.0.0.0/16"
        transit_gateway_id: "{{ transit_gateway.transit_gateway.transit_gateway_id }}"
  register: add_routes
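You can also confirm the Transit Gateway's state before relying on the new routes by using the newly promoted `amazon.aws.ec2_transit_gateway_info` module (covered in the next section). This is a minimal sketch that assumes the `transit_gateway` variable registered above; parameter and return value names follow the promoted module and may vary slightly between releases, so check the module documentation for your installed version.

- name: Retrieve details of the centralized Transit Gateway
  amazon.aws.ec2_transit_gateway_info:
    transit_gateway_ids:
      - "{{ transit_gateway.transit_gateway.transit_gateway_id }}"
  register: tgw_info

- name: Display the Transit Gateway state
  ansible.builtin.debug:
    msg: "Transit Gateway state: {{ tgw_info.transit_gateways[0].state }}"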
New modules
This release brings with it a number of newly supported modules that have been promoted from community support to Red Hat support. The functionality covered by these new Red Hat-supported modules includes:
| Module | Description |
| --- | --- |
| autoscaling_instance_refresh | Start or cancel an EC2 Auto Scaling Group (ASG) instance refresh in AWS. |
| autoscaling_instance_refresh_info | Retrieve information about EC2 Auto Scaling Group (ASG) instance refreshes in AWS. |
| ec2_launch_template | Manage EC2 launch templates. |
| ec2_placement_group | Manage EC2 placement groups. |
| ec2_placement_group_info | List EC2 placement group(s) details. |
| ec2_transit_gateway | Manage EC2 Transit Gateways. |
| ec2_transit_gateway_info | Retrieve information about EC2 Transit Gateways in AWS. |
| ec2_transit_gateway_vpc_attachment | Manage EC2 Transit Gateway VPC attachments. |
| ec2_transit_gateway_vpc_attachment_info | Retrieve information about EC2 Transit Gateway VPC attachments. |
| ec2_vpc_egress_igw | Manage AWS VPC egress-only internet gateways. |
| ec2_vpc_nacl | Manage network access control lists (ACLs). |
| ec2_vpc_nacl_info | Retrieve information about network ACLs in an AWS VPC. |
| ec2_vpc_peering | Create, delete, accept, and reject VPC peering connections between two VPCs. |
| ec2_vpc_peering_info | Retrieve information about AWS VPC peerings. |
| ec2_vpc_vpn | Manage EC2 VPN connections. |
| ec2_vpc_vpn_info | Retrieve information about EC2 VPN connections. |
| elb_classic_lb_info | Retrieve information about EC2 Classic Elastic Load Balancers in AWS. |
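To give a quick taste of the newly promoted content, the sketch below starts a rolling instance refresh on an Auto Scaling group and then queries its progress. It assumes an existing ASG referenced by the hypothetical `asg_name` variable, and the parameter names follow the community versions of these modules, so double-check the module documentation for the release you have installed.

- name: Start a rolling instance refresh on an existing Auto Scaling Group
  amazon.aws.autoscaling_instance_refresh:
    name: "{{ asg_name }}"        # Hypothetical variable holding the ASG name
    state: "started"
    strategy: "Rolling"
    preferences:
      min_healthy_percentage: 90  # Keep at least 90% of capacity healthy during the refresh
      instance_warmup: 120        # Seconds to wait before considering a new instance healthy
  register: refresh_start

- name: Check the progress of the instance refresh
  amazon.aws.autoscaling_instance_refresh_info:
    name: "{{ asg_name }}"
  register: refresh_status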
In addition to the newly promoted modules, 4 more new modules have been added to the collection.
| Module | Description |
| --- | --- |
| autoscaling_instance | Manage instances associated with AWS Auto Scaling Groups (ASGs). |
| autoscaling_instance_info | Retrieve information about instances associated with AWS ASGs. |
| ec2_launch_template_info | Retrieve information about EC2 launch templates. |
| ec2_vpc_egress_igw_info | Retrieve information about AWS VPC egress-only internet gateways. |
In an upcoming blog post, we will showcase practical use case scenarios that take advantage of these newly supported modules. Stay tuned for some insightful tips!
New boto3/botocore versioning
The amazon.aws collection has dropped support for `botocore<1.31.0` and `boto3<1.28.0`. Most modules will continue to work with older versions of the AWS software development kit (SDK); however, compatibility with older versions of the AWS SDK is not guaranteed and will not be tested. When using older versions of the AWS SDK, Red Hat Ansible Automation Platform will display a warning. Check out the module documentation for the minimum required version for each module.
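If your control node or execution environment still ships older SDK releases, one simple way to meet the new baseline is to install the minimum supported versions with the `ansible.builtin.pip` module. This is a minimal sketch; in containerized execution environments you would normally bake these dependencies into the image instead.

- name: Ensure boto3 and botocore meet the collection's minimum versions
  ansible.builtin.pip:
    name:
      - "boto3>=1.28.0"
      - "botocore>=1.31.0"
    state: present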
New Python support policy
On July 30, 2022, AWS announced that the AWS Command Line Interface (AWS CLI) v1 and the AWS SDK for Python (boto3 and botocore) will no longer support Python 3.8. Following the AWS SDKs' support policy update, which removes support for Python versions below 3.8, and to continue providing Red Hat customers with secure and maintainable tools, this release of the collection deprecates support for Python versions lower than 3.8. Full removal of support for Python versions below 3.8 is scheduled for version 10.0.0.
Deprecated features
This collection release also introduces deprecations into the collection's modules.
| Module | Description |
| --- | --- |
| ec2_vpc_peer | The |
| ec2_vpc_peering_info | The |
| s3_object | Support for |
Breaking changes
This collection release also introduces some breaking changes into the collection's modules. These changes represent prior deprecations that have now been completely removed.
| Module | Description |
| --- | --- |
| aws_ec2 | The parameter |
| kms_key, kms_key_info | The |
Changes for developers
If you are an active contributor to the amazon.aws collection or are interested in becoming one, there are 2 key changes to consider.
Firstly, this release introduces a breaking change in the `module_utils.botocore` module, where the `conn_type` parameter of the `boto3_conn` method is now mandatory. This change enables a more consistent and reliable configuration of the connection type when using `boto3_conn`.
Additionally, there are minor updates to improve error handling and consistency. Both the `module_utils.botocore` and `plugin_utils.botocore` modules now catch the `BotoCoreError` exception rather than an incomplete list of its subclasses, enhancing error management during interactions with AWS services. Furthermore, `module_utils.modules` now uses `boto3.Session` in place of `botocore.Session` for consistency across the codebase. These updates contribute to improved stability, maintainability and clarity in the collection's code.
Code quality improvement
This release continues the ongoing initiative to enhance the amazon.aws collection by focusing on refactoring key modules like S3, EC2 and RDS to improve code readability, maintainability and performance. This enhancement will ultimately deliver a smoother and more efficient user experience. Additionally, certain variables have been renamed for consistent naming across the codebase, and type hinting has been added to all functions, further strengthening code clarity and reliability.
While this release includes updates for the most prominent modules, refactoring is still in progress for several other modules in the collection. These modules are scheduled for future releases to further enhance code quality and the overall user experience.
In this release, we have completed significant updates to keep the RETURN blocks of plugins up to date. Specifically, we updated the RETURN block documentation for several modules, ensuring the documented output data structures are accurate and complete, which is essential for understanding and effectively using the returned data. Additionally, we enhanced the plugin documentation by adopting the new Ansible semantic markup format, which improves readability and clarity, making the documentation more user friendly.
Where to go next
- Check out Red Hat Summit 2025!
- For further reading and information, visit other blogs related to Ansible Automation Platform.
- Check out the YouTube playlist for everything about Ansible Collections.
- Are you new to Ansible automation and want to learn? Check out our getting started guide on developers.redhat.com.