[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [libvirt] [PATCH v2 10/11] qemu: auto-add bridges and allow using them



On 04/19/2013 01:02 AM, Eric Blake wrote:
> On 04/17/2013 01:00 PM, Ján Tomko wrote:
>> @@ -1321,7 +1363,39 @@ qemuDomainAssignPCIAddresses(virDomainDefPtr def,
>>      qemuDomainObjPrivatePtr priv = NULL;
>>  
>>      if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE)) {
>> -        if (!(addrs = qemuDomainPCIAddressSetCreate(def)))
>> +        int nbuses = 1;
>> +        int i;
>> +        int rv;
>> +
>> +        for (i = 0; i < def->ncontrollers; i++) {
>> +            if (def->controllers[i]->type == VIR_DOMAIN_CONTROLLER_TYPE_PCI &&
>> +                def->controllers[i]->model == VIR_DOMAIN_CONTROLLER_MODEL_PCI_BRIDGE)
>> +                nbuses++;
>> +        }
>> +        if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_PCI_BRIDGE)) {
>> +            virDomainDeviceInfo info;
>> +            /* 1st pass to figure out how many PCI bridges we need */
>> +            if (!(addrs = qemuDomainPCIAddressSetCreate(def, nbuses, true)))
>> +                goto cleanup;
>> +            if (nbuses && qemuAssignDevicePCISlots(def, qemuCaps, addrs) < 0)
>> +                goto cleanup;
>> +            /* Reserve 1 extra slot for a (potential) bridge */
>> +            if (qemuDomainPCIAddressSetNextAddr(addrs, &info) < 0)
>> +                goto cleanup;
>> +
>> +            for (i = 1; i < addrs->nbuses; i++) {
>> +                if ((rv = virDomainDefMaybeAddController(
>> +                        def, VIR_DOMAIN_CONTROLLER_TYPE_PCI,
>> +                        i, VIR_DOMAIN_CONTROLLER_MODEL_PCI_BRIDGE)) < 0)
>> +                    goto cleanup;
>> +                /* If we added a new bridge, we will need one more address */
>> +                if (rv > 0 && qemuDomainPCIAddressSetNextAddr(addrs, &info) < 0)
>> +                    goto cleanup;
>> +            }
>> +            qemuDomainPCIAddressSetFree(addrs);
>> +            addrs = NULL;
>> +        }
> 
> Do we need an else that fails if we have no pci bridge support, but more
> than one pci controller is present?
>

I'd rather disallow multiple PCI controllers with the same index (and
check if pci-root has index 0).

Specifying two bridges with indices 1 and 3 also doesn't mean that buses
0-2 are available (which is what my code above would assume if bus 3
were unused).

It would also be nice to add them to the qemu command line without
referencing bridges that don't exist yet, by requiring that bus < index
in each bridge's address and inserting them in index order (but I'm
worried that doing so in virDomainDefMaybeAddController might break
other things).
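A rough sketch of the invariant I mean (the helper and the
`bus_of_index` representation are hypothetical, not libvirt code):
if every pci-bridge controller at index i sits on a bus with a number
lower than i, then emitting the controllers in ascending index order
never references a bridge that hasn't been defined yet.

```c
#include <assert.h>

/* bus_of_index[i] is the bus number that the pci controller at
 * index i is plugged into; index 0 is the pci-root and is skipped.
 * Returns 1 if every bridge only references an earlier bus. */
static int bridge_order_ok(const int bus_of_index[], int ncontrollers)
{
    int i;
    for (i = 1; i < ncontrollers; i++) {
        if (bus_of_index[i] >= i)
            return 0;  /* would reference itself or a later bridge */
    }
    return 1;
}
```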

>> +        if (!(addrs = qemuDomainPCIAddressSetCreate(def, addrs->nbuses, false)))
>>              goto cleanup;
>>  
>>          if (qemuAssignDevicePCISlots(def, qemuCaps, addrs) < 0)


>> @@ -1503,30 +1593,44 @@ qemuDomainPCIAddressGetNextSlot(qemuDomainPCIAddressSetPtr addrs,
>>                                  virDevicePCIAddressPtr next_addr)
>>  {
>>      virDevicePCIAddress tmp_addr = addrs->lastaddr;
>> -    int i;
>> +    int i,j;
> 
> Space after comma.
> 
>>      char *addr;
>>  
>>      tmp_addr.slot++;
>> -    for (i = 0; i < QEMU_PCI_ADDRESS_SLOT_LAST; i++, tmp_addr.slot++) {
>> -        if (QEMU_PCI_ADDRESS_SLOT_LAST <= tmp_addr.slot) {
>> -            tmp_addr.slot = 0;
>> -        }
>> +    for (j = 0; j < addrs->nbuses; j++) {
>> +        for (i = 0; i < QEMU_PCI_ADDRESS_SLOT_LAST; i++, tmp_addr.slot++) {
>> +            if (QEMU_PCI_ADDRESS_SLOT_LAST <= tmp_addr.slot) {
>> +                /* slot 0 is unusable */
>> +                tmp_addr.slot = 1;
>> +                i++;
>> +            }
> 
> I found this use of 'i' a bit confusing - any time you change the
> iteration conditions inside the loop body, it becomes trickier to follow
> the logic.  I think you got it right, in that basically you want to
> iterate over 31 slots, instead of 32 (by skipping slot 0), but where the
> slot you probe starts from addrs->lastaddr and wraps around rather than
> starting at 1.
> 
> Would it be any clearer if the inner loop iterated over i=1; i <
> SLOT_LAST, so that the only increment of i is in the loop header?

Yes, that would make it slightly less disgusting. Or maybe i = 0;
i < SLOT_LAST - 1?
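Something along these lines, with the only increment of i in the loop
header (a standalone sketch, not the actual patch; `next_free_slot` and
the `used[]` array are made up to keep it self-contained):

```c
#include <assert.h>

#define SLOT_LAST 32  /* slots 0..31 per bus; slot 0 is unusable */

/* Probe the 31 usable slots (1..31) on one bus, starting just after
 * last_slot and wrapping around, skipping slot 0 on the wrap.
 * Returns the first free slot, or -1 if all 31 are taken. */
static int next_free_slot(const int used[SLOT_LAST], int last_slot)
{
    int i;
    for (i = 1; i < SLOT_LAST; i++) {
        int slot = last_slot + i;
        if (slot >= SLOT_LAST)
            slot = slot - SLOT_LAST + 1;  /* wrap past slot 0 */
        if (!used[slot])
            return slot;
    }
    return -1;
}
```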

> 
> Also, I think I see why keeping this cache logic is essential for bus 0
> for back-compat reasons (we want to avoid hot-plugging a new device into
> a previously-used but now unplugged slot, at least until we have run out
> of unique slots to plug into, in order to minimize guest confusion about
> different devices trying to reuse a slot).  But do we really need to
> extend it to bus 1?  That is, if one call fills the last slot of bus 0
> and sets addrs->lastaddr to 5, do we really want the next call to start
> at slot 5 of bus 1, or even though nothing has ever probed slots 1-4 of
> bus 1?  I'm worried that your caching of last-used slot is not quite
> right; in fact, I wonder if you need to cache both last-used bus and
> last-used slot on that bus, or even last-used slot on all buses, rather
> than just a single last-used slot without relation to bus.
> 

It does cache the domain, bus and slot.

>>  
>> -        if (!(addr = qemuPCIAddressAsString(&tmp_addr)))
>> -            return -1;
>> +            if (!(addr = qemuPCIAddressAsString(&tmp_addr)))
>> +                return -1;
>>  
>> -        if (qemuDomainPCIAddressSlotInUse(addrs, &tmp_addr)) {
>> -            VIR_DEBUG("PCI addr %s already in use", addr);
>> -            VIR_FREE(addr);
>> -            continue;
>> -        }
>> +            if (qemuDomainPCIAddressSlotInUse(addrs, &tmp_addr)) {
>> +                VIR_DEBUG("PCI addr %s already in use", addr);
>> +                VIR_FREE(addr);
>> +                continue;
>> +            }
>>  
>> -        VIR_DEBUG("Found free PCI addr %s", addr);
>> -        VIR_FREE(addr);
>> +            VIR_DEBUG("Found free PCI addr %s", addr);
>> +            VIR_FREE(addr);
>>  
>> -        addrs->lastaddr = tmp_addr;
>> -        *next_addr = tmp_addr;
>> -        return 0;
>> +            addrs->lastaddr = tmp_addr;
>> +            *next_addr = tmp_addr;
>> +            return 0;
> 
> Again, thinking about last-used slot, it seems like if this filled up
> all available slots on the bus, wouldn't it be better to set
> addrs->lastaddr to 1 (to start the next bus correctly)?  Does the outer
> loop (on 'j') need to start iterating from the last-used bus instead of
> starting at bus 0 and finding all 31 slots full?

j is just there to make sure we stop if all the buses are full.
tmp_addr.bus will be set to whatever bus was in lastaddr, and yes,
advancing to the next bus and starting from slot 1 would make more
sense. It might even make the loop look like it's from this planet.
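For the sake of discussion, the cursor advance could look like this
(a minimal sketch of the non-dry-run case only; the struct and helper
names are hypothetical, and the dryRun bus-growing branch is omitted):

```c
#include <assert.h>

#define SLOT_LAST 32  /* slots 0..31 per bus; slot 0 is unusable */

typedef struct {
    int bus;
    int slot;
} pci_cursor;

/* Advance to the next candidate (bus, slot): bump the slot, and when
 * the bus runs out, move to the next bus starting at slot 1, wrapping
 * back to bus 0 after the last bus. */
static void next_addr(pci_cursor *a, int nbuses)
{
    a->slot++;
    if (a->slot >= SLOT_LAST) {
        a->slot = 1;      /* slot 0 is unusable */
        a->bus++;
        if (a->bus >= nbuses)
            a->bus = 0;   /* wrap; a dry run would grow instead */
    }
}
```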

> 
> Or is caching the last-used slot something that we can completely avoid
> in the case of a multibus support, and only worry about it for older qemu?
> 

Without it, we'd need to go over all the buses every time a new address
is needed. But I'm not sure we want to cache a reservation of slot 7
when slot 6 is empty (which can only happen with hotplug, since
addresses from the XML config are not cached and the rest are assigned
sequentially).

(Btw: I accidentally removed the caching from ReserveSlot in 5/11,
which is where explicitly specified addresses of hotplugged devices are
cached.)

>> +        }
>> +        tmp_addr.bus++;
>> +        if (addrs->nbuses <= tmp_addr.bus) {
>> +            if (addrs->dryRun) {
>> +                if (qemuPCIAddressSetGrow(addrs, &tmp_addr) < 0)
>> +                    return -1;
>> +            } else {
>> +                tmp_addr.bus = 0;
>> +            }
>> +            tmp_addr.slot = 1;
>> +        }
>>      }
>>  
>>      virReportError(VIR_ERR_INTERNAL_ERROR,


