
[libvirt] PATCH: Parallelize OOM testing



The OOM testing takes a long time to run. Add in valgrind, and it takes a
very, very long time to run. My dev box has 8 CPUs though, and it'd be
nice to use all 8 rather than just 1 to cut the time by a factor of 8. So
this patch tweaks the control harness for the OOM test rig so that it can
fork off multiple worker processes. Each worker is assigned a number from
1 -> n, and is responsible for testing the allocation failures in its
round-robin slot (those whose zero-based index modulo the worker count
equals the worker's number minus one).

By default it still runs as a single process, but if you set VIR_TEST_MP=1
then we use sysconf() to get the number of online CPUs and fork off one
worker per CPU.

E.g., to run the OOM testing with valgrind, parallelized:

  $ VIR_TEST_MP=1 VIR_TEST_OOM=1 make valgrind

Or to run a test directly

  $ VIR_TEST_MP=1 VIR_TEST_OOM=1 valgrind --leak-check=full ./xmconfigtest

Daniel

diff -r 3548706d639f tests/testutils.c
--- a/tests/testutils.c	Thu Jul 03 14:24:20 2008 +0100
+++ b/tests/testutils.c	Thu Jul 03 14:24:40 2008 +0100
@@ -330,7 +330,9 @@
     int n;
     char *oomStr = NULL, *debugStr;
     int oomCount;
-
+    int mp = 0;
+    pid_t *workers;
+    int worker = 0;
     if ((debugStr = getenv("VIR_TEST_DEBUG")) != NULL) {
         if (virStrToLong_ui(debugStr, NULL, 10, &testDebug) < 0)
             testDebug = 0;
@@ -344,6 +346,13 @@
             oomCount = 0;
         if (oomCount)
             testOOM = 1;
+    }
+
+    if (getenv("VIR_TEST_MP") != NULL) {
+        mp = sysconf(_SC_NPROCESSORS_ONLN);
+        fprintf(stderr, "Using %d worker processes\n", mp);
+        if (VIR_ALLOC_N(workers, mp) < 0)
+            return EXIT_FAILURE;
     }
 
     if (testOOM)
@@ -371,11 +380,27 @@
         else
             fprintf(stderr, "%d) OOM of %d allocs ", testCounter, approxAlloc);
 
+        if (mp) {
+            int i;
+            for (i = 0 ; i < mp ; i++) {
+                workers[i] = fork();
+                if (workers[i] == 0) {
+                    worker = i + 1;
+                    break;
+                }
+            }
+        }
+
         /* Run once for each alloc, failing a different one
            and validating that the test case failed */
-        for (n = 0; n < approxAlloc ; n++) {
+        for (n = 0; n < approxAlloc && (!mp || worker) ; n++) {
+            if (mp && (n % mp) != (worker - 1))
+                continue;
             if (!testDebug) {
-                fprintf(stderr, ".");
+                if (mp)
+                    fprintf(stderr, "%d", worker);
+                else
+                    fprintf(stderr, ".");
                 fflush(stderr);
             }
             virAllocTestOOM(n+1, oomCount);
@@ -383,6 +408,20 @@
             if (((func)(argc, argv)) != EXIT_FAILURE) {
                 ret = EXIT_FAILURE;
                 break;
+            }
+        }
+
+        if (mp) {
+            if (worker) {
+                _exit(ret);
+            } else {
+                int i, status;
+                for (i = 0 ; i < mp ; i++) {
+                    waitpid(workers[i], &status, 0);
+                    if (!WIFEXITED(status) ||
+                        WEXITSTATUS(status) != EXIT_SUCCESS)
+                        ret = EXIT_FAILURE;
+                }
+                VIR_FREE(workers);
             }
         }
 

-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

