Libvirt/Kernel-based Virtual Machine (KVM) Driver Enhancements
- It is now possible to add a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device that allows the compute node to provide entropy to compute instances in order to fill their entropy pools. The default entropy device used on the host is /dev/random; however, a hardware RNG device physically attached to the host can also be used. The Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance (an illustrative example of setting this and the other image properties described below follows this list).
- Watchdog support has been added, allowing instance lifecycle events to be triggered when a crash or kernel panic is detected within the instance. The watchdog device used is an i6300esb. It is enabled by setting the hw_watchdog_action property in the image properties or flavor extra specifications to a value other than disabled. Supported hw_watchdog_action values, which denote the action the watchdog device takes when an instance failure is detected, are poweroff, reset, pause, and none.
- It is now possible to configure instances to use a video driver other than the default. This allows the specification of different video driver models, different amounts of video RAM, and different numbers of video heads. The video driver model and the amount of video RAM are configured by setting the hw_video_model and hw_video_vram properties, respectively, in the image metadata; the number of heads is configured from within the guest operating system. Currently supported video driver models are vga, cirrus, vmvga, xen, and qxl.
- Modified kernel arguments can now be provided to compute instances at boot. The kernel arguments are retrieved from the os_command_line key in the image metadata stored in the OpenStack Image Service (Glance), if a value for the key was provided; otherwise the default kernel arguments continue to be used.
- VirtIO SCSI (virtio-scsi) can now be used instead of VirtIO Block (virtio-blk) to provide block device access for instances. VirtIO SCSI is a paravirtualized SCSI controller device designed as a future successor to VirtIO Block, aiming to provide improved scalability and performance. VirtIO SCSI is enabled for a guest instance by setting hw_disk_bus_model to virtio-scsi in the image properties.
- Changes have been made to the expected format of the /etc/nova/nova.conf configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver-specific flags, including those for the Libvirt driver, have also been moved to their own option groups.
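As a rough illustration of how these image-driven features might be enabled, the sketch below uses the python-glanceclient v2 API to set the properties described above on an existing image. The endpoint, token, image ID, and property values are placeholders, and the property names are taken verbatim from the descriptions above; consult the Image Service and Compute documentation for the authoritative names and accepted values.

```python
# Minimal sketch: tagging a Glance image with the Compute-related properties
# described above. Assumes python-glanceclient (v2 API), a reachable Glance
# endpoint, and a valid token; all identifiers below are placeholders.
from glanceclient import Client

glance = Client('2', endpoint='http://glance.example.com:9292',
                token='PLACEHOLDER_TOKEN')

IMAGE_ID = '11111111-2222-3333-4444-555555555555'  # placeholder image UUID

glance.images.update(
    IMAGE_ID,
    hw_rng='true',                  # expose a Virtio RNG device to instances
    hw_watchdog_action='reset',     # reset the instance when a failure is detected
    hw_video_model='qxl',           # select the qxl video driver
    hw_video_vram='64',             # amount of video RAM, as described above
    os_command_line='console=tty0 console=ttyS0',  # custom kernel arguments
    hw_disk_bus_model='virtio-scsi',  # use VirtIO SCSI instead of VirtIO Block
)
```

The watchdog action can equally be carried on a flavor as an extra specification rather than on the image, which is convenient when the behaviour should apply to every instance of a given size rather than every instance built from a given image.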
Compute API Enhancements
- API facilities have been added for defining, listing, and retrieving the details of instance groups. Instance groups provide a facility for grouping related virtual machine instances at boot time and applying policies that determine how they are scheduled in relation to other members of the group. Currently supported policies are affinity, which indicates that all instances in the group should be scheduled on the same host, and anti-affinity, which indicates that they should be scheduled on separate hosts. Retrieving the details of an instance group using the updated API also returns the list of group members (see the sketch after this list).
- The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the ExtendedServicesDelete API extension.
- The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the "nova hypervisor-show" command.
- The Compute API currently supports both XML and JSON formats. Support for the XML format has now been marked as deprecated and will be retired in a future release.
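To make the instance-group workflow concrete, here is a minimal sketch using python-novaclient to create an anti-affinity group, boot a server into it, and list the group's members. The client constructor arguments, names, and image and flavor identifiers are placeholder assumptions; the exact constructor signature varies between novaclient versions.

```python
# Minimal sketch: creating an anti-affinity instance group and booting a
# server into it. Assumes python-novaclient; credentials and IDs are
# placeholders for illustration only.
from novaclient import client as nova_client

nova = nova_client.Client('2', 'admin', 'PLACEHOLDER_PASSWORD', 'admin',
                          'http://keystone.example.com:5000/v2.0')

# Define the group and the policy its members must satisfy.
group = nova.server_groups.create(name='web-tier',
                                  policies=['anti-affinity'])

# Boot an instance as a member of the group; the scheduler should place it
# on a different host from the group's other members.
server = nova.servers.create(
    name='web-01',
    image='PLACEHOLDER_IMAGE_ID',
    flavor='PLACEHOLDER_FLAVOR_ID',
    scheduler_hints={'group': group.id},
)

# Retrieving the group also returns the list of its current members.
print(nova.server_groups.get(group.id).members)
```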
Notifications
- Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode, or taken out of maintenance mode.
- Notifications are now generated upon the creation and deletion of keypairs.
Scheduler
- Modifications have been made to the scheduler to add an extensible framework allowing it to make decisions based on resource utilization. Expect to see more development in this space in coming releases, particularly as the framework is extended to handle specific resource classes.
- An initial experimental implementation of a caching scheduler driver was added. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.
- A new scheduler filter, AggregateImagePropertiesIsolation, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys aggregate_image_properties_isolation_namespace and aggregate_image_properties_isolation_separator determine which image properties are examined by the filter (a simplified sketch of the matching follows this list).
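To make the matching behaviour easier to picture, the following is a simplified, hypothetical sketch of the comparison the filter performs; it is not Nova's actual implementation, and the namespace and separator values merely stand in for the aggregate_image_properties_isolation_namespace and aggregate_image_properties_isolation_separator configuration keys.

```python
# Hypothetical sketch of namespaced image-property matching against host
# aggregate metadata; not Nova's actual filter code.

def host_passes(image_props, aggregate_metadata,
                namespace='isolated_ns', separator='.'):
    """Return True if the host's aggregate metadata is compatible with the
    namespaced image properties.

    image_props        -- image properties, e.g. {'isolated_ns.os': 'windows'}
    aggregate_metadata -- merged metadata of the aggregates the host belongs
                          to, mapping each key to the set of values seen
                          across those aggregates (empty for hosts that are
                          not in any aggregate)
    """
    prefix = namespace + separator
    for key, value in image_props.items():
        if not key.startswith(prefix):
            continue  # properties outside the namespace are ignored
        allowed = aggregate_metadata.get(key)
        if allowed is not None and value not in allowed:
            return False  # the host's aggregates disagree with the image
    return True


# A host in an aggregate advertising isolated_ns.os=windows accepts matching
# images and rejects conflicting ones; a host with no aggregates accepts both.
print(host_passes({'isolated_ns.os': 'windows'}, {'isolated_ns.os': {'windows'}}))  # True
print(host_passes({'isolated_ns.os': 'linux'}, {'isolated_ns.os': {'windows'}}))    # False
print(host_passes({'isolated_ns.os': 'linux'}, {}))                                 # True
```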
Testing
During the Icehouse release cycle, work has continued to facilitate third-party testing of the hypervisor drivers that live in the OpenStack Compute source tree. This allows third parties to provide continuous integration (CI) infrastructure that runs regression tests against each proposed OpenStack Compute patch and records the results so that they can be referred to as the code is reviewed. This ensures not only test coverage for these drivers but also valuable additional test coverage of the shared components provided by the OpenStack Compute project itself.
Upgrades
The Compute services now allow for a level of rolling upgrade, whereby control services can be upgraded to Icehouse while they continue to interact with compute services running code from the Havana release. This allows for a more gradual approach to upgrading an OpenStack cloud, or a designated logical subset thereof, than has typically been possible in the past.
Work on stabilizing Icehouse will continue for some time before the community gathers again in May for OpenStack Summit 2014 in Atlanta to define the design vision for the next six-month release cycle. If you want to help test some of the above features, why not get involved in the upcoming RDO test day using freshly baked packages based on the third Icehouse release milestone?