
Greetings, automation enthusiasts! As organizations increasingly turn to cloud solutions for enhanced security, scalability, and control, the demand for robust automation tools becomes more critical than ever.

In this blog post, we'll delve into Ansible Certified Content Collections and the new features and enhancements in the newest release of the vmware.vmware_rest collection for managing VMware environments.

Whether you're orchestrating complex workflows or ensuring seamless scalability, Ansible's vmware.vmware_rest collection is here to simplify the challenges of private cloud management. Let's get started!

Forward-looking Changes

Much of our work in the 3.0.0 release has been focused on the following areas:

  • Support for VMware vSphere 7.0U3 API
  • New features and capabilities
  • Clarifying support policies for ansible-core and Python
  • Improving the overall quality of the collection code

New API

The vmware.vmware_rest collection has been regenerated against the VMware vSphere 7.0.3 API, also known as 7.0U3. This pivotal update ushers in a wave of new features and improvements, harnessing the enhanced capabilities of the updated API.

Alongside these new features, the 7.0U3 API brings refined documentation to the collection, so users can navigate its enriched functionality with greater clarity and precision.

New Features Highlights

The 7.0U3 API introduces several new features; example use case scenarios are shown below.

Use Case #1: High-Performance Database Server

Suppose you are configuring a virtual machine (VM) in a VMware vSphere environment to serve as a high-performance database server for a critical application. The application has stringent requirements for low-latency, high-speed data access.

Requirements:

  • The database server requires fast storage to handle a large number of transactions and queries per second.
  • Low latency access to data is critical for application responsiveness.

Solution: Non-Volatile Memory Express (NVMe) Storage Configuration.

Non-Volatile Memory Express (NVMe) adapters are used to connect NVMe storage devices (such as NVMe solid-state drives, or SSDs) to a computer or VM. NVMe is a protocol designed to provide efficient access to storage devices via the Peripheral Component Interconnect Express (PCIe) interface. Here are some scenarios where configuring NVMe adapters could be advantageous:

  • High-Performance Storage: NVMe storage devices give you significantly higher performance than conventional storage technologies such as SATA or SAS. If your application or workload needs high-speed data access, the usage of NVMe adapters for NVMe SSDs can be beneficial.
  • I/O-Intensive Workloads: Workloads involving heavy I/O operations, such as databases or data analysis, can benefit from the low latency and high throughput offered by NVMe storage.
  • Data-Centric Applications: Applications that rely heavily on data access speed, such as real-time analytics or high-performance computing (HPC) applications, can benefit from using NVMe storage.
  • Improving Overall System Performance: NVMe storage devices are known for their low latency and high bandwidth, making them suitable for improving overall system performance. By setting up NVMe adapters, you enable efficient communication between the CPU and NVMe storage, and reduce data access times.
  • Storage Tiering: In storage environments with tiering strategies, NVMe storage can be used as a high-performance tier for frequently accessed data, while other slower storage tiers handle less frequently accessed data.
  • Applications Requiring Low Latency: NVMe storage devices are designed to provide low-latency access to data.

Let’s see how to do that in Ansible Automation Platform using the new vmware.vmware_rest.vcenter_vm module’s nvme_adapters option.

  - name: Deploy a Virtual Machine with NVMe adapters
    vmware.vmware_rest.vcenter_vm:
      placement:
        cluster: "{{ lookup('vmware.vmware_rest.cluster_moid', '/my_dc/host/my_cluster') }}"
        datastore: "{{ lookup('vmware.vmware_rest.datastore_moid', '/my_dc/datastore/local') }}"
        folder: "{{ lookup('vmware.vmware_rest.folder_moid', '/my_dc/vm') }}"
        resource_pool: "{{ lookup('vmware.vmware_rest.resource_pool_moid', '/my_dc/host/my_cluster/Resources') }}"
      name: "my_vm"
      guest_OS: RHEL_9_64
      hardware_version: VMX_19
      memory:
        hot_add_enabled: true
        size_MiB: 2048
      nvme_adapters:
        - bus: 0
          pci_slot_number: 0
      disks:
      - type: SATA
        backing:
          type: VMDK_FILE
          vmdk_file: "[local] my_vm/{{ disk_name }}.vmdk"
      - type: SATA
        new_vmdk:
          name: second_disk
          capacity: 32000000000
      cdroms:
      - type: SATA
        sata:
          bus: 0
          unit: 2
      nics:
      - backing:
          type: STANDARD_PORTGROUP
          network: "{{ lookup('vmware.vmware_rest.network_moid', '/my_dc/network/VM Network') }}"

The parameters bus and pci_slot_number represent specific details about the location or placement of the NVMe adapter within the virtual machine's hardware configuration.

The bus parameter typically represents the number of the bus to which the NVMe card is connected. In a virtualized environment such as VMware, it is often set to 0, indicating the default or primary bus.

The pci_slot_number parameter refers to the PCI slot number in which the NVMe adapter is inserted. In a Peripheral Component Interconnect (PCI) architecture, each device is assigned a unique slot number. Setting pci_slot_number to 0 often indicates that the NVMe adapter is inserted into the first available slot.
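Building on the playbook above, a new virtual disk can be attached directly to the NVMe controller instead of a SATA one. The snippet below is a minimal sketch: it assumes the NVME disk type is available in your vCenter version, and the VM name, disk name, and capacity are illustrative values only.

```yaml
- name: Deploy a Virtual Machine with a disk on the NVMe controller
  vmware.vmware_rest.vcenter_vm:
    placement:
      cluster: "{{ lookup('vmware.vmware_rest.cluster_moid', '/my_dc/host/my_cluster') }}"
      datastore: "{{ lookup('vmware.vmware_rest.datastore_moid', '/my_dc/datastore/local') }}"
      folder: "{{ lookup('vmware.vmware_rest.folder_moid', '/my_dc/vm') }}"
      resource_pool: "{{ lookup('vmware.vmware_rest.resource_pool_moid', '/my_dc/host/my_cluster/Resources') }}"
    name: "my_db_vm"
    guest_OS: RHEL_9_64
    hardware_version: VMX_19
    # The NVMe controller that the disk below will attach to
    nvme_adapters:
      - bus: 0
    # An NVME-type disk is placed on the NVMe controller rather than a SATA bus
    disks:
      - type: NVME
        new_vmdk:
          name: db_disk
          capacity: 64000000000
```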

It is important to note that although NVMe offers significant performance advantages, the decision to use NVMe adapters must be guided by the specific use case and workload requirements. In addition, not all workloads may require the high performance offered by NVMe, and the cost of NVMe storage devices must be considered in relation to overall system requirements and budget.

Use Case #2: Secure Content Distribution in a Multi-Tenant Environment

Suppose you are managing a VMware infrastructure where multiple tenants share the same vCenter Server. Each tenant has its own set of VMs and resources and you want to set up a Content Library to distribute VM templates.

Requirements:

  • Each tenant can access and deploy content only from their dedicated Content Library, thus preventing unauthorized access or modifications by other tenants.

Solution:

  • You need to create separate Content Libraries for each tenant to keep their content isolated and assign unique security policies for each Content Library to restrict access to Content Libraries that don't belong to a specific tenant.

In VMware vSphere, there are two main types of content libraries: Local Content Library and Subscribed Content Library. Let's explore the key differences between these two types:

 

Local Content Library

  • Location: hosted on a specific vCenter Server or ESXi host.
  • Content source (VM templates, ISOs, etc.): stored locally within the vCenter Server or ESXi host that hosts the content library.
  • Use case: suitable for environments where the content is needed only within a specific vCenter Server instance or ESXi host. Local libraries are not automatically synchronized with other vCenter Servers.

Subscribed Content Library

  • Location: associated with a single vCenter Server, but the content itself is hosted externally at a published URL.
  • Content source (VM templates, ISOs, etc.): stored at an external source, typically accessible over HTTP or HTTPS. The vCenter Server subscribes to this external source to synchronize and maintain the content locally.
  • Use case: useful in environments with multiple vCenter Servers or distributed infrastructures. Subscribed libraries allow you to centrally manage and distribute content across different vCenter Servers, ensuring consistency and ease of management.

In summary, the choice between a Local Content Library and a Subscribed Content Library depends on your specific requirements. Local Content Libraries are suitable for standalone environments, while Subscribed Content Libraries are more versatile in distributed or multi-vCenter Server environments.
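To illustrate the subscribed variant, here is a minimal sketch of creating a Subscribed Content Library with the vmware.vmware_rest.content_subscribedlibrary module; the subscription URL, library name, and datastore path are placeholders for illustration:

```yaml
- name: Create a Subscribed Content Library
  vmware.vmware_rest.content_subscribedlibrary:
    state: present
    name: "tenant1_subscribed_library"
    subscription_info:
      # URL of the published library this subscription pulls content from
      subscription_url: "https://publisher.example.com/cls/vcsp/lib/lib.json"
      authentication_method: NONE
      automatic_sync_enabled: true
      on_demand: false
    storage_backings:
    - datastore_id: "{{ lookup('vmware.vmware_rest.datastore_moid', '/my_dc/datastore/local') }}"
      type: DATASTORE
```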

The new 7.0U3 API introduces two new options, security_policy_id and unset_security_policy_id, for both the vmware.vmware_rest.content_locallibrary and vmware.vmware_rest.content_subscribedlibrary modules. Let’s see how to secure your local content library:

  - name: Create Local Content Library with a Security Policy
    vmware.vmware_rest.content_locallibrary:
      state: present
      name: "contentlocal_library_1"
      description: "Content local library description"
      storage_backings:
      - datastore_id: "{{ lookup('vmware.vmware_rest.datastore_moid', '/my_dc/datastore/local') }}"
        type: DATASTORE
      security_policy_id: "{{ security_policy_id }}"

The security_policy_id option represents the security policy applied to this content library; setting it makes the content library secure. This field is ignored in an update operation if unset_security_policy_id is set to true.
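Conversely, the security policy can be removed during an update by setting unset_security_policy_id to true. The sketch below assumes the library's identifier was registered from a previous task; the library_info variable is illustrative:

```yaml
- name: Remove the security policy from the Local Content Library
  vmware.vmware_rest.content_locallibrary:
    state: present
    # Identifier of the existing library, e.g. registered from the creation task
    library_id: "{{ library_info.id }}"
    name: "contentlocal_library_1"
    unset_security_policy_id: true
```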

Benefits:

  • Isolation: Each tenant has its own Content Library, providing isolation and independence in managing their content.
  • Security: By setting up security policies, only authorized users from a specific tenant can access and modify the content within their designated library.
  • Compliance: This setup aligns with compliance requirements to maintain data separation and access controls.
  • Efficiency: Centralized content management is still maintained, but with strict access controls, making the distribution of standardized content more efficient.

This use case highlights the importance of security policies for a more secure and controlled environment, especially in multi-tenant setups where multiple entities share the same infrastructure. It helps prevent accidental or intentional interference between tenants, contributing to a more secure and compliant virtualization environment.

New ansible-core and Python Support Policies

With the release of ansible-core 2.14, a notable shift in supportability policies has been introduced for the vmware.vmware_rest collection: support for ansible-core versions 2.13 and older has been discontinued. Users are encouraged to upgrade to ansible-core 2.14 or later to ensure seamless compatibility and to take advantage of the latest enhancements and features of the collection.

Furthermore, in tandem with the revamped supportability policy, ansible-core 2.14 extends compatibility to Python 3.9 and later versions. Upgrading to ansible-core 2.14 therefore not only guarantees compatibility with the vmware.vmware_rest collection, but also lets you harness the features and optimizations of recent Python releases for enhanced performance and efficiency.

Changes for Developers

Code quality and CI improvement

In our pursuit of elevated code quality, our primary objective is to create software that is more reliable, efficient, and easier to maintain. This initiative is geared towards enhancing the overall experience for developers and end users alike.

Transition to GitHub Actions

As part of this endeavor, we have made the decision to transition some of our jobs from Zuul to GitHub Actions for our Continuous Integration (CI) processes. This shift not only streamlines our CI pipeline but also takes advantage of the seamless integration offered by GitHub. By doing so, we aim to enhance scalability, promote collaboration, optimize workflow management, and increase overall development process efficiency.

Linters

To further fortify our commitment to code quality, we have added isort and ansible-lint to the project. These tools play a pivotal role in maintaining code consistency and adhering to established coding standards.

Ongoing Code Quality Enhancement

Recognizing that improving code quality is a continuous effort requiring persistent attention, this work is an ongoing initiative that will be progressively integrated into future releases. Our commitment to refining code quality is a testament to our dedication to delivering software that stands out for reliability, efficiency, and maintainability.


About the author

Alina Buzachis, PhD, is a Senior Software Engineer at Red Hat Ansible, where she works primarily on cloud technologies. Alina received her PhD in Distributed Systems in 2021, focusing on advanced microservice orchestration techniques in the Cloud-to-Thing continuum. In her spare time, Alina enjoys traveling, hiking, and cooking.

