[rhos-list] openstack networking with openvswitch

Prashanth Prahalad prashanth.prahal at gmail.com
Thu May 2 21:31:03 UTC 2013


Hi Folks,

I'm in the process of setting up an Open vSwitch deployment, following this
guide:
https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/2/pdf/Release_Notes/Red_Hat_OpenStack_Preview-2-Release_Notes-en-US.pdf

My plan was to create networks segmented by different VLAN IDs using OVS.

This is my configuration:

------------------------------------------------------------------------------
[compute node]             [nova-compute and other nova utilities]
                           [quantum-server]
                           [quantum-dhcp-agent]
------------------------------------------------------------------------------
 10.9.10.43                 eth5
      |                       |
   [mgmt]                  [data]
      |                       |
 10.9.10.129                eth1
------------------------------------------------------------------------------
[network node]             [quantum-l3-agent]
------------------------------------------------------------------------------

I've pasted the configuration files at the end of this email for clarity, but
here's what I was expecting to accomplish.

Step 1: Create a network
*quantum net-create opn1 --provider:network-type vlan
--provider:physical-network physnet5 --provider:segmentation-id 500*
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5d47f63f-c804-4d23-8aaa-86373bc96b3b |
| name                      | opn1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet5                             |
| provider:segmentation_id  | 500                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b26737806380406dbed3d273308a6a2f     |
+---------------------------+--------------------------------------+
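For reference, physnet5 maps to br-eth5 in the plugin config pasted at the
end of this email; as far as I understand the guide, the data bridge is
supposed to be created on each host roughly like this (br-eth5/eth5 are just
the names from my layout):

    ovs-vsctl add-br br-eth5
    ovs-vsctl add-port br-eth5 eth5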

Step 2: Create a subnet
*quantum subnet-create opn1 65.1.1.0/24*
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "65.1.1.2", "end": "65.1.1.254"} |
| cidr             | 65.1.1.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 65.1.1.1                                   |
| host_routes      |                                            |
| id               | 5df16c75-31eb-4332-b76c-c0986525e2de       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 5d47f63f-c804-4d23-8aaa-86373bc96b3b       |
| tenant_id        | b26737806380406dbed3d273308a6a2f           |
+------------------+--------------------------------------------+


Step 3: Boot an image and attach it to this network
*nova boot --image cirros --flavor m1.tiny --nic
net-id=5d47f63f-c804-4d23-8aaa-86373bc96b3b --key-name test my_1_server*

At this point, the VM comes up with an address on the subnet and is reachable
locally from within the compute node:

+--------------------------------------+-------------+--------+---------------+
| ID                                   | Name        | Status | Networks      |
+--------------------------------------+-------------+--------+---------------+
| 72224a7c-273e-4dea-922b-09c38bd77538 | my_1_server | ACTIVE | opn1=65.1.1.3 |
+--------------------------------------+-------------+--------+---------------+
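By "reachable locally" I mean roughly the following: since use_namespaces is
False, the dnsmasq/tap interfaces sit directly on the compute host, so from
that host something like this works, and the quantum port IDs (quantum
port-list) line up with the tap/qvo names in the ovs-vsctl output below:

    ping -c 3 65.1.1.3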

However, I was expecting eth5 (via br-eth5) to be wired up to carry VLAN 500,
which we created in Step 1 - judging from the ovs-vsctl show output, that does
not appear to have happened:

f83d2ba4-ff86-4e2c-8f00-0c572e30533f
    Bridge "br-eth5"
        Port "br-eth5"
            Interface "br-eth5"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "tapa12a740a-c2"
            tag: 5
            Interface "tapa12a740a-c2"
                type: internal
        Port "qvoccf5b741-60"
            tag: 4095
            Interface "qvoccf5b741-60"
        Port "qvodc963159-16"
            tag: 4095
            Interface "qvodc963159-16"
        Port "qvod7544ba5-c1"
            tag: 5
            Interface "qvod7544ba5-c1"
        Port "qvo4894fa5d-40"
            tag: 4095
            Interface "qvo4894fa5d-40"
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.9.0"
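For comparison, this is roughly what I expected ovs-vsctl show to report once
the physical network is wired up - based on my (possibly wrong) understanding
that the OVS agent links br-int to the mapped bridge with a veth pair, and
that eth5 sits on br-eth5; the int-br-eth5/phy-br-eth5 port names are my
assumption:

    Bridge br-int
        Port "int-br-eth5"
            Interface "int-br-eth5"
        ...
    Bridge "br-eth5"
        Port "eth5"
            Interface "eth5"
        Port "phy-br-eth5"
            Interface "phy-br-eth5"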


My question is: how are the VMs on this compute node expected to talk to VMs
on other compute nodes if the physical interface is not plugged into br-int?
Am I missing something here?
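In case it helps anyone reproduce this, the agent side on the compute node can
be checked with something like the commands below (I believe these are the
RHOS service and log names, but they may differ):

    service quantum-openvswitch-agent status
    ovs-vsctl list-ports br-eth5
    grep -i "physical network" /var/log/quantum/openvswitch-agent.log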

Regards,
Prashanth


Below is a snapshot of the different configuration files:
*[Compute Node]*
*quantum.conf*

[DEFAULT]
rpc_backend = quantum.openstack.common.rpc.impl_qpid
qpid_hostname = 10.9.10.43
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
auth_strategy = keystone
verbose = True
debug = True
bind_port = 9696
[keystone_authtoken]
admin_tenant_name = openstack_network
admin_user = openstack_network
admin_password = test123

*dhcp_agent.ini*
[DEFAULT]
auth_url = http://localhost:35357/v2.0/
admin_tenant_name = openstack_network
admin_user = openstack_network
admin_password = test123
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = False
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
admin_username = quantum

*/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini*
[DATABASE]
sql_connection = mysql://quantum:quantum@r5-20/ovs_quantum
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet5:100:1000
bridge_mapping = physnet5:br-eth5
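One thing I am not certain about: I believe the option the OVS plugin/agent
actually reads is bridge_mappings (plural), so here is how I think the [OVS]
section should look, in case the spelling above is part of the problem:

[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet5:100:1000
bridge_mappings = physnet5:br-eth5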

*nova.conf*
[DEFAULT]
<….>
network_api_class = nova.network.quantumv2.api.API
quantum_admin_username = openstack_network
quantum_admin_password = test123
quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/
quantum_auth_strategy = keystone
quantum_admin_tenant_name = openstack_network
quantum_url = http://10.9.10.43:9696/
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=true


On the network node, this is the L3 agent configuration:
*l3_agent.ini*
[DEFAULT]
auth_url = http://10.9.10.43:35357/v2.0/
admin_user = openstack_network
admin_password = test123
admin_tenant_name = openstack_network
auth_strategy = keystone
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = False
verbose = True
debug = False
auth_region = regionOne
router_id = 0496b7f6-1b27-487f-8a95-d7430302b080
external_network_bridge = br-ex
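For completeness, the router_id above refers to a router that was created and
wired up beforehand, roughly along these lines (the IDs below are
placeholders):

    quantum router-create ext-router
    quantum router-gateway-set <router-id> <external-net-id>
    quantum router-interface-add <router-id> <subnet-id>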