One challenge with virtualization platforms, and particularly cloud environments with self-service catalogs, is that a small number of virtual instances can over-consume resources and starve other instances of CPU, network, or storage bandwidth.

In Red Hat OpenStack Platform 9, support was added for creating a QoS (Quality of Service) policy per OpenStack Networking (Neutron) port. This makes it possible to apply bandwidth-limiting policies to the ports attached to instances. For example, if an instance has the potential to consume a lot of bandwidth, a QoS rule can be created that caps the peak bandwidth it is allowed to use.

The following steps walk through creating and applying a bandwidth-limiting rule and then validating that it has been applied to the instance. While this is a manual process, it could be automated with an OpenStack Orchestration (Heat) template or with Red Hat CloudForms; a scripted sketch of the same steps follows the walkthrough.

1. Source the overcloudrc file:

[stack@ospvd ~]$ source overcloudrc

 

2. List the projects to get the tenant ID under which the QoS policy will be created:

[stack@ospvd ~]$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 77f1ccb8b2e94dc4b5cf667f3d451f96 | admin   |
| 7ac4aca69c8643289c2716f1228f2346 | service |
+----------------------------------+---------+

 

3. Create the QoS policy and associate it with the tenant:

[stack@ospvd ~]$ neutron qos-policy-create 'bw-limiter' --tenant-id 77f1ccb8b2e94dc4b5cf667f3d451f96
Created a new policy:
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description |                                      |
| id          | 374757cd-e623-42e7-b40b-b7c6979d7860 |
| name        | bw-limiter                           |
| rules       |                                      |
| shared      | False                                |
| tenant_id   | 77f1ccb8b2e94dc4b5cf667f3d451f96     |
+-------------+--------------------------------------+

 

4. Create a rule for the QoS policy just defined, in this example a bandwidth-limit rule:

[stack@ospvd ~]$ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 30 --max-burst-kbps 100
Created a new bandwidth_limit_rule:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| id             | 1a142dbc-697a-4f4a-b22c-bc2b5ec42d88 |
| max_burst_kbps | 100                                  |
| max_kbps       | 30                                   |
+----------------+--------------------------------------+
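
Optionally, display the policy to confirm the rule is now attached; the policy's rules field should list the rule ID created above:

[stack@ospvd ~]$ neutron qos-policy-show bw-limiter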

 

5. List the tenant's networks, then list the ports on the corresponding subnet:

[stack@ospvd ~]$ neutron net-list
+--------------------------------------+-----------+-------------------------------------------------------+
| id                                   | name      | subnets                                               |
+--------------------------------------+-----------+-------------------------------------------------------+
| e5281226-75d0-4cf5-85bf-d495e40b6b6e | admin-net | ec683040-ace7-4114-bc1b-07c8fbb528e1 192.168.100.0/24 |
+--------------------------------------+-----------+-------------------------------------------------------+
[stack@ospvd ~]$ neutron port-list|grep e5281226-75d0-4cf5-85bf-d495e40b6b6e
[stack@ospvd ~]$ neutron port-list|grep ec683040-ace7-4114-bc1b-07c8fbb528e1
| 4b4a97d9-2737-4e81-8d22-27e77cceefe1 |      | fa:16:3e:3c:de:3d | {"subnet_id": "ec683040-ace7-4114-bc1b-07c8fbb528e1", "ip_address": "192.168.100.3"} |
| 4d64fb0e-7d3f-4e82-91d2-cc05aabf128e |      | fa:16:3e:ec:e1:d8 | {"subnet_id": "ec683040-ace7-4114-bc1b-07c8fbb528e1", "ip_address": "192.168.100.1"} |
| ed798c3b-05f9-49aa-9466-5f308b4fdaef |      | fa:16:3e:13:71:2b | {"subnet_id": "ec683040-ace7-4114-bc1b-07c8fbb528e1", "ip_address": "192.168.100.2"} |

 

6. Associate the QoS policy with the Neutron port of choice to apply the policy to that port:

[stack@ospvd ~]$ neutron port-update 4b4a97d9-2737-4e81-8d22-27e77cceefe1 --qos-policy bw-limiter
Updated port: 4b4a97d9-2737-4e81-8d22-27e77cceefe1
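
The port can also be checked directly; with the QoS extension enabled, the port's qos_policy_id field should now reference the bw-limiter policy:

[stack@ospvd ~]$ neutron port-show 4b4a97d9-2737-4e81-8d22-27e77cceefe1 | grep qos_policy_id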

To identify the compute node hosting the instance that owns this port (needed for the validation in the next step), display the instance details:

[stack@ospvd ~]$ nova show 69f332be-4d74-4957-97ab-3468caae045b
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | overcloud-compute-1.localdomain                          |
| OS-EXT-SRV-ATTR:hostname             | test1                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | overcloud-compute-1.localdomain                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                        |
| OS-EXT-SRV-ATTR:kernel_id            |                                                          |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                        |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                          |
| OS-EXT-SRV-ATTR:reservation_id       | r-4sfdhoyx                                               |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                 |
| OS-EXT-SRV-ATTR:user_data            | -                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2016-10-05T12:50:58.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| admin-net network                    | 192.168.100.3                                            |
| config_drive                         |                                                          |
| created                              | 2016-10-05T12:50:48Z                                     |
| description                          | test1                                                    |
| flavor                               | m1.tiny (1)                                              |
| hostId                               | 8e1a39ec95c37b355115e5a51fa428f8c6f542bd17db632fe4938595 |
| host_status                          | UP                                                       |
| id                                   | 69f332be-4d74-4957-97ab-3468caae045b                     |
| image                                | cirros (d7e87a71-3e33-4565-b391-f40a43ba3249)            |
| key_name                             | -                                                        |
| locked                               | False                                                    |
| metadata                             | {}                                                       |
| name                                 | test1                                                    |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 77f1ccb8b2e94dc4b5cf667f3d451f96                         |
| updated                              | 2016-10-05T12:50:58Z                                     |
| user_id                              | fc670d1ecb074bb5af58cacdf478a38b                         |
+--------------------------------------+----------------------------------------------------------+

 

7. On the compute node hosting the instance (overcloud-compute-1 above), validate that the rule has been applied to the instance's port:

[root@overcloud-compute-1 qemu]# virsh dumpxml instance-00000001 | sed -n '/<interface.*/,/<\/interface.*/p'
   <interface type='bridge'>
     <mac address='fa:16:3e:3c:de:3d'/>
     <source bridge='qbr4b4a97d9-27'/>
     <target dev='tap4b4a97d9-27'/>
     <model type='virtio'/>
     <driver name='qemu'/>
     <alias name='net0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>

[root@overcloud-compute-1 heat-admin]# ovs-vsctl list interface qvo4b4a97d9-27 | grep ingress
ingress_policing_burst: 100
ingress_policing_rate: 30
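
For reference, the manual steps above can be strung together in a minimal shell sketch. It assumes the same overcloudrc, admin project, policy name, and limits used in this walkthrough, and it looks the port up by its fixed IP address; in practice, the equivalent resources would more naturally be expressed in a Heat template or driven from CloudForms.

#!/bin/bash
# Sketch only: create a QoS policy, add a bandwidth-limit rule, and attach it to a port.
# Assumes overcloudrc in the current directory and the names/limits from the walkthrough.
set -euo pipefail

source ./overcloudrc

# Tenant that will own the policy (the admin project in this example)
TENANT_ID=$(openstack project show admin -f value -c id)

# Create the policy and its bandwidth-limit rule (30 kbps rate, 100 kb burst)
neutron qos-policy-create bw-limiter --tenant-id "$TENANT_ID"
neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 30 --max-burst-kbps 100

# Find the port that carries the instance's fixed IP and attach the policy to it
PORT_ID=$(neutron port-list | awk '/192\.168\.100\.3/ {print $2}')
neutron port-update "$PORT_ID" --qos-policy bw-limiter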

 

Benjamin Schmaus is a Red Hat Cloud TAM in the NA Central region. He has been involved with Linux since 1998 and has supported business environments in a variety of industries: retail, defense, software, financial, higher education and K-12. Most recently he has been focused on enabling our customers in deploying, operating and supporting Red Hat OpenStack Platform and Red Hat Ceph Storage.

Innovation is only possible because of the people behind it. Join us at Red Hat Summit, May 2-4, to hear from TAMs and other Red Hat experts in person! Register now for only US$1,000 using code CEE17.

A Red Hat Technical Account Manager (TAM) is a specialized product expert who works collaboratively with IT organizations to strategically plan for successful deployments and help realize optimal performance and growth. The TAM is part of Red Hat’s world class Customer Experience and Engagement organization and provides proactive advice and guidance to help you identify and address potential problems before they occur. Should a problem arise, your TAM will own the issue and engage the best resources to resolve it as quickly as possible with minimal disruption to your business.

