[libvirt] [PATCH 4/4] test_driver: Implement virNodeAllocPages

Michal Privoznik mprivozn at redhat.com
Thu Oct 13 02:57:07 UTC 2016


On 13.10.2016 05:38, John Ferlan wrote:
> 
> 
> On 10/05/2016 03:30 AM, Michal Privoznik wrote:
>> Now that the cells in the test driver are huge page aware, we can
>> implement virNodeAllocPages. Basically there's just one catch: if
>> users specify startCell == -1, they want the huge page change to
>> be spread over all the nodes. Therefore we just recalculate the
>> change they want to make to each node and continue.
>>
>> Signed-off-by: Michal Privoznik <mprivozn at redhat.com>
>> ---
>>  src/test/test_driver.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 78 insertions(+)
>>
>> diff --git a/src/test/test_driver.c b/src/test/test_driver.c
>> index a3f74f8..d9e408e 100644
>> --- a/src/test/test_driver.c
>> +++ b/src/test/test_driver.c
>> @@ -2867,6 +2867,83 @@ testNodeGetFreePages(virConnectPtr conn,
>>      return ret;
>>  }
>>  
>> +static int
>> +testNodeAllocPages(virConnectPtr conn,
>> +                   unsigned int npages,
>> +                   unsigned int *pageSizes,
>> +                   unsigned long long *pageCounts,
>> +                   int startCell,
>> +                   unsigned int cellCount,
>> +                   unsigned int flags)
>> +{
>> +    testDriverPtr privconn = conn->privateData;
>> +    bool add = !(flags & VIR_NODE_ALLOC_PAGES_SET);
>> +    ssize_t i, ncounts = 0;
>> +    size_t j;
>> +    int lastCell;
>> +    int ret = -1;
>> +
>> +    virCheckFlags(VIR_NODE_ALLOC_PAGES_SET, -1);
>> +
>> +    testDriverLock(privconn);
>> +
>> +    lastCell = privconn->numCells - 1;
> 
> lastCell would be 1 then.
> 
>> +
>> +    if (startCell < -1 || startCell > lastCell) {
>> +        virReportError(VIR_ERR_INTERNAL_ERROR,
>> +                       _("start cell %d out of range (0-%d)"),
>> +                       startCell, lastCell);
>> +        goto cleanup;
>> +    }
>> +
>> +    lastCell = MIN(lastCell, startCell + (int) cellCount - 1);
> 
> if cellCount is 2, then lastCell = MIN(1, -1 + 2 - 1) ?

Ah. So I hadn't thought of the case where the user wants both behaviours:
startCell = -1, cellCount = 2. He/she is basically telling us: in the
first step I want you to allocate the pages across all NUMA nodes, and
in the second step, I want you to allocate some additional pages on node
#0. Damn, this is gonna be twice as complicated. Okay, let me rework
this and post v2.
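
One way to dodge the lastCell underflow John points out above would be
to branch on startCell == -1 before doing any range arithmetic. This is
just a rough sketch of the idea (it simply ignores cellCount in the
startCell == -1 case, i.e. it sidesteps the mixed case rather than
implementing it), not necessarily what v2 will look like:

  if (startCell == -1) {
      /* Spread the change over all cells: scale the per-size counts
       * and walk every cell in the loop below. */
      startCell = 0;
      lastCell = privconn->numCells - 1;
      for (j = 0; j < npages; j++)
          pageCounts[j] /= privconn->numCells;
  } else {
      if (startCell < 0 || startCell > lastCell) {
          virReportError(VIR_ERR_INTERNAL_ERROR,
                         _("start cell %d out of range (0-%d)"),
                         startCell, lastCell);
          goto cleanup;
      }
      lastCell = MIN(lastCell, startCell + (int) cellCount - 1);
  }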

> 
> 
> Does the following hunk belong before the startCell < -1 if?
> 
>> +
>> +    if (startCell == -1) {
>> +        /* Okay, hold on to your hats because this is gonna get wild.
>> +         * startCell == -1 means that user wants us to allocate
>> +         * pages over all NUMA nodes proportionally. Just
>> +         * recalculate the pageCounts and we should be good. */
> 
> Color me confounded.
> 
>> +        for (j = 0; j < npages; j++)
>> +            pageCounts[j] /= privconn->numCells;
> 
> Why does this work?  pageCounts[j] would essentially be divided by 2,
> but isn't that an input value?  How does that make us ensure we're
> pulling "equally" from across all cells?  That is, what's the difference
> between supplying 0 or -1 for startCell?  I'm missing something...

Exactly. I mean, if the user tells us the following:

npages = 1, pageSizes = {2048}, pageCounts = {678}, startCell = -1,
cellCount = 1

then they want to allocate 678 2M pages across the whole host. They
don't care whether the pages are distributed equally over all NUMA
nodes. For all they know, all 678 pages could be allocated on node #0,
or in any other combination that adds up to 678.

If, however, they care where the pages are allocated, they have to
provide startCell != -1. So for instance, if the host has 2 NUMA nodes
and they want to allocate 678 pages in total, distributed equally
between the two nodes:

npages = 1, pageSizes = {2048}, pageCounts = {339}, startCell = 0,
cellCount = 2

Yup, it is a very complicated API, because it allows you to allocate
multiple page sizes across multiple NUMA nodes at once.
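
Just to make the two scenarios above concrete, here is roughly what the
corresponding public API calls would look like (illustrative sketch
only; the 2048 KiB page size and the counts are taken from the examples
above, and flags = 0 means "add to the pool" rather than "set the pool
size"):

  #include <libvirt/libvirt.h>

  /* conn is an already-open connection, e.g. from virConnectOpen(). */
  static int
  allocExample(virConnectPtr conn)
  {
      unsigned int sizes[] = { 2048 };          /* page size in KiB */
      unsigned long long anywhere[] = { 678 };  /* total, any placement */
      unsigned long long perNode[] = { 339 };   /* per selected NUMA node */

      /* 678 pages somewhere on the host (startCell = -1): */
      if (virNodeAllocPages(conn, 1, sizes, anywhere, -1, 1, 0) < 0)
          return -1;

      /* 339 pages on each of nodes 0 and 1 (startCell = 0, cellCount = 2),
       * i.e. 678 in total, distributed explicitly: */
      if (virNodeAllocPages(conn, 1, sizes, perNode, 0, 2, 0) < 0)
          return -1;

      return 0;
  }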

Anyway, because this is all emulated in our test driver (and we
specifically document that no equal distribution is guaranteed for the
startCell = -1 case), we can distribute the allocation however we
please. So I went with equal distribution.

> 
>> +        startCell = 0;
>> +        lastCell = privconn->numCells - 1;
> 
> So this doesn't change from our default?
> 
>> +    }
>> +
> 
> You could probably explain the following to me 5 different ways and each
> time I'd probably respond, huh?
> 
>> +    for (i = startCell; i <= lastCell; i++) {
>> +        testCellPtr cell = &privconn->cells[i];
>> +        size_t k;
>> +
>> +        for (j = 0; j < cell->npages; j++) {
>> +            for (k = 0; k < npages; k++) {
>> +                unsigned long long pagesFree = cell->pages[j].pagesFree;
>> +
>> +                if (pageSizes[k] != cell->pages[j].pagesize)
>> +                    continue;
> 
> Going back to patch 3, this seems to match what I was thinking...
> 
> The rest of this I guess I'm not sure what to say other than I'm lost.
> 
> 
> John
> 
>> +
>> +                if (add)
>> +                    pagesFree += pageCounts[k];
>> +                else
>> +                    pagesFree = pageCounts[k];
>> +
>> +                if (pagesFree > cell->pages[j].pages) {
>> +                    virReportError(VIR_ERR_OPERATION_FAILED,
>> +                                   _("Unable to allocate %llu pages"),
>> +                                   pagesFree);
>> +                    goto cleanup;
>> +                }
>> +
>> +                cell->pages[j].pagesFree = pagesFree;
>> +                ncounts++;

Oh, I just realized: I need to modify the pool allocation size, not how
many free pages there are. That is, I need to modify cell->pages[j].pages.
And I should probably recalculate pagesFree accordingly...
Sigh. It is complicated and easy to get lost in.
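
In other words, the inner loop above should resize the pool
(cell->pages[j].pages) and then derive pagesFree from it, rather than
poking pagesFree directly. Roughly along these lines (just a sketch of
the idea, not the actual v2):

  unsigned long long newPages = cell->pages[j].pages;
  unsigned long long used = cell->pages[j].pages - cell->pages[j].pagesFree;

  if (add)
      newPages += pageCounts[k];
  else
      newPages = pageCounts[k];

  /* Never shrink the pool below what's already in use. */
  if (newPages < used) {
      virReportError(VIR_ERR_OPERATION_FAILED,
                     _("Unable to allocate %llu pages"), newPages);
      goto cleanup;
  }

  cell->pages[j].pages = newPages;
  cell->pages[j].pagesFree = newPages - used;
  ncounts++;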

Will send v2.

Michal



