extras-buildsys README,1.6,1.7

Seth Vidal (skvidal) fedora-extras-commits at redhat.com
Sun Jul 10 05:13:49 UTC 2005


Author: skvidal

Update of /cvs/fedora/extras-buildsys
In directory cvs-int.fedora.redhat.com:/tmp/cvs-serv21870

Modified Files:
	README 
Log Message:

fix some minor typos and put linebreaks into README



Index: README
===================================================================
RCS file: /cvs/fedora/extras-buildsys/README,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -r1.6 -r1.7
--- README	7 Jul 2005 04:05:46 -0000	1.6
+++ README	10 Jul 2005 05:13:47 -0000	1.7
@@ -2,57 +2,88 @@
 
 System Requirements:
     - Python 2.3 or 2.4
-    - m2crypto as long as it's not 0.10
+    - pyOpenSSL
     - sqlite python bindings
-    - mock (http://www.dulug.duke.edu/~skvidal/mock/)
-
+    - mock (http://linux.duke.edu/~skvidal/mock/)
+    - createrepo
 
 Getting Started
 ------------------------------------------
 
-To allow users to retrieve logs and other status, you need to run an HTTP server that allows access to the result dir (the 'server_work_dir' config option).
-
-You will also need to set up the infrastructure for the yum repository that the builders connect to and retrieve the latest packages.  This can be either HTTP, NFS, SMB, etc.  You then need to point 'yum' to this repo in step (5) of the Builder Setup.
+To allow users to retrieve logs and other status, you need to run an HTTP 
+server that allows access to the result dir (the 'server_work_dir' config 
+option).
+
+You will also need to set up the infrastructure for the yum repository that the 
+builders connect to and retrieve the latest packages.  This can be either HTTP, 
+NFS, SMB, etc.  You then need to point 'yum' to this repo in step (5) of the 
+Builder Setup.
 
 
 Builder Setup:
 1) In the CVS checkout directory on the client, execute:
 	make DESTDIR=/ install
-2) Create a user for the builder.  The builder drops root privileges before running the build, but must have root to be able to do initial setup
-3) Copy the client Key, Cert, and CA Cert to the /etc/plague/builder/certs directory
+2) Create a user for the builder.  The builder drops root privileges before 
+   running the build, but must have root to be able to do initial setup
+3) Copy the client Key, Cert, and CA Cert to the /etc/plague/builder/certs 
+   directory
 4) Things to modify in the client's CONFIG.py:
-    - Modify the 'distro' and 'repo' options to match the targets you've configured in /etc/mock/.  These targets are usually in the form of "distro-target-arch-repo".  'arch' and 'target' are passed by the build system dynamically, but 'distro' and 'repo' are hardcoded in the config file.  Examples are "fedora-development-i386-core" and "fedora-development-i386-extras".
-5) Configure the mock target files in /etc/mock.  You only need one target file for each major arch you support.  For example, you don't need separate 'ia32e' or 'amd64' config files, since these just use the normal 'x86_64' config file
+    - Modify the 'distro' and 'repo' options to match the targets you've 
+    configured in /etc/mock/.  These targets are usually in the form of 
+    "distro-target-arch-repo".  'arch' and 'target' are passed by the build 
+    system dynamically, but 'distro' and 'repo' are hardcoded in the config 
+    file.  Examples are "fedora-development-i386-core" and 
+    "fedora-development-i386-extras".
+5) Configure the mock target files in /etc/mock.  You only need one target 
+   file for each major arch you support.  For example, you don't need separate 
+   'ia32e' or 'amd64' config files, since these just use the normal 'x86_64' 
+   config file
 6) Start the client. ex: "/usr/bin/plague-builder 127.0.0.1 i386 i686"
 
 
 On the Server:
 
-1) Follow the instructions at the bottom of this file titled "Configuring SSL for your Build System"
+1) Follow the instructions at the bottom of this file titled "Configuring SSL 
+   for your Build System"
 2) In the CVS checkout directory, execute:
 	make DESTDIR=/ install
-3) Copy the server Key, Cert, and CA Cert to the /etc/plague/server/certs directory
-4) Copy the client authentication CA Cert to the /etc/plague/server/certs directory
+3) Copy the server Key, Cert, and CA Cert to the /etc/plague/server/certs 
+   directory
+4) Copy the client authentication CA Cert to the /etc/plague/server/certs 
+   directory
 5) Things to modify in the server's CONFIG.py:
-    - Update the Key, Cert, and CA Cert, and client auth CA Cert file options to point to the files in steps 3 and 4
-    - Modify the 'targets' option to add/remove the arches and targets you'll be building
-    - Modify the 'builders' option to point to the build clients you'll be using.  Note the "https".
-    - If you want to do simple SRPM builds, set the 'use_srpm_not_cvs' option to true
+    - Update the Key, Cert, and CA Cert, and client auth CA Cert file options 
+    to point to the files in steps 3 and 4
+    - Modify the 'targets' option to add/remove the arches and targets you'll 
+    be building
+    - Modify the 'builders' option to point to the build clients you'll be 
+    using.  Note the "https".
+    - If you want to do simple SRPM builds, set the 'use_srpm_not_cvs' option 
+    to true
 6) Start the server.  ex: "/usr/bin/plague-server 127.0.0.1"
 
 
 
 Operation:
 
-1) You must add a user account for any user who wishes to use the build system.  This is accomplished with the 'plague-user-manager.py' tool, installed by default in /usr/bin.  You add a user like this:
-	/usr/bin/plague-user-manager.py /etc/plague/server/userdb add dcbw at redhat.com own_jobs kill_any_job modify_users server_admin
-2) Clients then run plague-client to queue jobs.  When first run, plague-client creates the ~/.plague-client.cfg file
-	- Point the client to the server's address
+1) You must add a user account for any user who wishes to use the build system.
+  This is accomplished with the 'plague-user-manager.py' tool, installed by 
+  default in /usr/bin.  You add a user like this:
+	/usr/bin/plague-user-manager.py /etc/plague/server/userdb add \
+	       dcbw at redhat.com own_jobs kill_any_job modify_users server_admin
+	       
+2) Clients then run plague-client to queue jobs.  When first run, 
+   plague-client creates the ~/.plague-client.cfg file
+   	- Point the client to the server's address
 	- Point the client to the correct certificates
-	- Make sure you change the email address in ~/.plague-client to match that of the 'user-cert' certificate
+	- Make sure you change the email address in ~/.plague-client.cfg to match 
+	  that of the 'user-cert' certificate
+
 3) To build a package, you use plague-client like so:
 	/usr/bin/plague-client build ethtool /home/dcbw/ethtool-1.8-4.src.rpm devel
-4) If the client returns "Package ethtool enqueued." then the enqueue was successful
+
+4) If the client returns "Package ethtool enqueued." then the enqueue was 
+   successful
 
 You can list your own jobs with:
 	/usr/bin/plague-client list
@@ -63,7 +94,11 @@
 Architectural Overview:
 ------------------------------------------
 
-The build system is composed of a single build server, and multiple build clients.  Clients run an XMLRPC server to which the build-server delivers build jobs.  The build server runs an XMLRPC server to allow submission of jobs, and to retrieve basic status information about both clients and the build system as a whole.
+The build system is composed of a single build server, and multiple build 
+clients.  Clients run an XMLRPC server to which the build-server delivers 
+build jobs.  The build server runs an XMLRPC server to allow submission of 
+jobs, and to retrieve basic status information about both clients and the 
+build system as a whole.
 
 
 
@@ -72,17 +107,40 @@
 usage: build-client <address> <architectures>
 ie   : build-client localhost sparc sparcv9 sparcv8
 
-Currently, build clients are limited to building one job at a time.  This limitation may be removed in the future.  They do not queue pending jobs, but will reject build requests when something is already building.  The build server is expected to queue and manage jobs at this time, and serialize requests to build clients.  This may not be the case in the future.
+Currently, build clients are limited to building one job at a time.  This 
+limitation may be removed in the future.  They do not queue pending jobs, 
+but will reject build requests when something is already building.  The build 
+server is expected to queue and manage jobs at this time, and serialize 
+requests to build clients.  This may not be the case in the future.
 
 main()
   `- Creates: XMLRPCBuildClientServer
-               `- Creates: i386Arch, x86_64Arch, PPCArch, etc (subclasses of BuildClientMach)
-
-The client creates an XMLRPC server object (XMLRPCBuildClientServer), and then processes requests in an infinite loop.  Every so often (currently 5 seconds) the server allows each build job that is still in process to update its status and perform work.  The XMLRPCBuildClientServer keeps a list of local build jobs, which are architecture specific, and forwards requests and commands for each job to that specific job, keyed off a unique id.
-
-Each build job (BuildClientMach and its architecture-specific subclasses like i386Arch) proceeds through a number of states.  Build jobs are periodically given time to do work (BuildClientMach.process()) by their BuildClientInstance (from XMLRPCBuildClientServer._process()), which is in turn periodically given time by the client's main loop.  During their processing time, build jobs check their state, see if any actions have completed, and advance to the next state if needed.  Communication with mock and retrieval of status from mock are done with popen2.Popen4() so that mock does not block the XMLRPC server from talking to the build server.
+               `- Creates: i386Arch, x86_64Arch, PPCArch, etc (subclasses 
+                  of BuildClientMach)
 
-All communication with the build server is done through SSL to ensure the identity of each party.  Both the XMLRPC server and the result file server are SSL-enabled, and require SSL certificates and keys to operate.  See later section in this document on how to configure SSL certificates for your build system.
+The client creates an XMLRPC server object (XMLRPCBuildClientServer), and 
+then processes requests in an infinite loop.  Every so often (currently 
+5 seconds) the server allows each build job that is still in process to 
+update its status and perform work.  The XMLRPCBuildClientServer keeps a 
+list of local build jobs, which are architecture specific, and forwards 
+requests and commands for each job to that specific job, keyed off a 
+unique id.
+
+Each build job (BuildClientMach and its architecture-specific subclasses like 
+i386Arch) proceeds through a number of states.  Build jobs are periodically 
+given time to do work (BuildClientMach.process()) by their BuildClientInstance 
+(from XMLRPCBuildClientServer._process()), which is in turn periodically given 
+time by the client's main loop.  During their processing time, build jobs 
+check their state, see if any actions have completed, and advance to the next 
+state if needed.  Communication with mock and retrieval of status from mock 
+are done with popen2.Popen4() so that mock does not block the XMLRPC server 
+from talking to the build server.
+
+All communication with the build server is done through SSL to ensure the 
+identity of each party.  Both the XMLRPC server and the result file server are 
+SSL-enabled, and require SSL certificates and keys to operate.  See later 
+section in this document on how to configure SSL certificates for your build 
+system.
 
 
 
@@ -90,23 +148,51 @@
 
 usage: build-server
 
-The build server runs two threads.  The first, the XMLRPC server (XMLRPCBuildMaster class), accepts requests to enqueue jobs for build and stuffs them into an sqlite database which contains all job details.  The second thread, the Build Master (BuildMaster class), pulls 'waiting' jobs from the database and builds them.  A third top-level object that runs in the same thread as the Build Master is the BuildClientManager, which keeps track of build clients (ArchWelders) and their status.
+The build server runs two threads.  The first, the XMLRPC server 
+(XMLRPCBuildMaster class), accepts requests to enqueue jobs for build and 
+stuffs them into an sqlite database which contains all job details.  The second 
+thread, the Build Master (BuildMaster class), pulls 'waiting' jobs from the 
+database and builds them.  A third top-level object that runs in the same 
+thread as the Build Master is the BuildClientManager, which keeps track of 
+build clients (ArchWelders) and their status.
 
 main()
   |- Creates: XMLRPCBuildMaster
   |- Creates: BuildClientManager
   |-           `- Creates: BuildClient (one for each remote build client)
-  |-                         `- Creates: BuildClientJob (one for each build job on each arch)
+  |-                         `- Creates: BuildClientJob (one for each build job 
+  |-                            on each arch)
   `- Creates: BuildMaster
                 `- Creates: BuildJob (one for each build job)
 
-The BuildClientManager object serves as a central location for all tracking and status information about each build job on each arch.  It creates an BuildClient instance for each remote build client.  The BuildClient instance keeps track of specific jobs building on all architectures on that remote build client.  It also serves as the XMLRPC client of the remote build client, proxying status information from it.
-
-BuildJobs must request that the BuildClientManager create a new BuildClientJob for each build on each architecture the BuildJob needs.  If there is an available build client (since build clients only build one job at a time across all arches they support), the BuildClientManager will pass the request to the arch-specific BuildClient instance, which creates the new arch-specific BuildClientJob, and pass it back through the BuildClientManager to the parent BuildJob.  If there is no available build client for the request, the BuildJob must periodically re-issue the build request to the BuildClientManager.
-
-BuildClientManager has a periodic processing routine that is called from the BuildMaster thread.  This processing routine calls the BuildClient.process() routine on each BuildClient instance, which in turn updates its view of the remote build client's status.  Thus, the BuildClientManager, through each BuildClient instance, knows the status and currently building job on each remote build client.
-
-BuildJobs track a single SRPM build through the entire build system.  They are created from the BuildMaster thread whenever the BuildMaster finds a job entry in the sqlite database with the status of 'waiting'.  BuildJobs proceed through a number of states: "initialize", "checkout", "make_srpm", "prep", "building", "finished", "cleanup", "failed", and "needsign".
+The BuildClientManager object serves as a central location for all tracking and 
+status information about each build job on each arch.  It creates a 
+BuildClient instance for each remote build client.  The BuildClient instance 
+keeps track of specific jobs building on all architectures on that remote 
+build client.  It also serves as the XMLRPC client of the remote build 
+client, proxying status information from it.
+
+BuildJobs must request that the BuildClientManager create a new BuildClientJob 
+for each build on each architecture the BuildJob needs.  If there is an 
+available build client (since build clients only build one job at a time 
+across all arches they support), the BuildClientManager will pass the request 
+to the arch-specific BuildClient instance, which creates the new arch-specific 
+BuildClientJob, and pass it back through the BuildClientManager to the parent 
+BuildJob.  If there is no available build client for the request, the BuildJob 
+must periodically re-issue the build request to the BuildClientManager.
+
+BuildClientManager has a periodic processing routine that is called from the 
+BuildMaster thread.  This processing routine calls the BuildClient.process() 
+routine on each BuildClient instance, which in turn updates its view of the 
+remote build client's status.  Thus, the BuildClientManager, through each 
+BuildClient instance, knows the status and currently building job on each 
+remote build client.
+
+BuildJobs track a single SRPM build through the entire build system.  They are 
+created from the BuildMaster thread whenever the BuildMaster finds a job entry 
+in the sqlite database with the status of 'waiting'.  BuildJobs proceed through
+a number of states: "initialize", "checkout", "make_srpm", "prep", "building", 
+"finished", "cleanup", "failed", and "needsign".
 
 Flow goes like this:
 
@@ -122,37 +208,55 @@
     - failed jobs? => failed
     - otherwise => needsign
 
-The BuildJob updates its status when it is periodically told to do so by the BuildManager.  At this point, it will advance to the next state, or spawn build jobs that have not yet started if build clients for those architectures are now available.  It stays in the "building" state until all jobs are first spawned, and then either completed or failed.
-
-All communication with build clients is done through SSL to ensure the identity of each party.  When the client requests the SRPM to build, SSL is used.  When the build server retrieves logs and RPMs from the build client, SSL is also used.  This ensures that build clients can be more or less trusted, or at least that some random build client is not serving you packages that might contaminate your repository.  See later section in this document on how to configure SSL certificates for your build system.
+The BuildJob updates its status when it is periodically told to do so by the 
+BuildMaster.  At this point, it will advance to the next state, or spawn build 
+jobs that have not yet started if build clients for those architectures are 
+now available.  It stays in the "building" state until all jobs are first 
+spawned, and then either completed or failed.
+
+All communication with build clients is done through SSL to ensure the 
+identity of each party.  When the client requests the SRPM to build, SSL 
+is used.  When the build server retrieves logs and RPMs from the build client, 
+SSL is also used.  This ensures that build clients can be more or less trusted, 
+or at least that some random build client is not serving you packages that 
+might contaminate your repository.  See later section in this document on how 
+to configure SSL certificates for your build system.
 
 
 Configuring SSL for your Build System
 --------------------------------------
 
-When you set up the build system, you essentially become a Certificate Authority.
-Because the build server and the build clients communicate using SSL, they need
-to exchange certificates to verify the others' identity.  You must first create
-a key/cert pair for the Build System Certificate Authority, which signs both
-the build server's certificate, and each build client's certificate.
+When you set up the build system, you essentially become a Certificate 
+Authority. Because the build server and the build clients communicate using 
+SSL, they need to exchange certificates to verify the others' identity.  
+You must first create a key/cert pair for the Build System Certificate 
+Authority, which signs both the build server's certificate, and each build 
+client's certificate.
 
 
 The Certificates on the Server:
 config_opts['server_cert'] -> server SSL certificate
 config_opts['server_key'] -> server private key
-config_opts['ca_cert'] -> CA certificate used to sign both server and builder certificates
-config_opts['ui_ca_cert'] -> CA cert that signs package maintainer's certificates, used to verify connections from plague-clients are authorized
+config_opts['ca_cert'] -> CA certificate used to sign both server and builder 
+                          certificates
+config_opts['ui_ca_cert'] -> CA cert that signs package maintainers' 
+                             certificates, used to verify connections from 
+                             plague-clients are authorized
 
 The Certificates on the Builders:
 config_opts['client_cert'] -> builder SSL certificate
 config_opts['client_key'] -> builder private key
-config_opts['ca_cert'] -> _same_ as server's 'ca_cert', the CA certificate used to sign both server and builder certificates
+config_opts['ca_cert'] -> _same_ as server's 'ca_cert', the CA certificate 
+                          used to sign both server and builder certificates
 
-Package Maintainer certificates (used by /usr/bin/plague-client, from ~/.plague-client.cfg)
+Package Maintainer certificates (used by /usr/bin/plague-client, 
+from ~/.plague-client.cfg)
 server-ca-cert -> _same_ as server and client's 'ca_cert'
 user-ca-cert -> CA cert that signed the package maintainer's 'user-cert'
-user-key -> package maintainer's private key, can be blank if private key and certificate are in the same file
-user-cert -> package maintainer's certificate, signed by 'user-ca-cert' and sent to build server to validate the plague-client's connection
+user-key -> package maintainer's private key, can be blank if private key and 
+            certificate are in the same file
+user-cert -> package maintainer's certificate, signed by 'user-ca-cert' and 
+             sent to build server to validate the plague-client's connection
 
 
 Setting up the Build System Certificate Authority
@@ -173,7 +277,8 @@
 
 3. Generate the BSCA certificate
 
-openssl req -new -x509 -key private/cakey.pem -out cacert.pem -extensions v3_ca -days 3650
+openssl req -new -x509 -key private/cakey.pem -out cacert.pem \
+            -extensions v3_ca -days 3650
 
 
 4. Generate a build server key
@@ -208,7 +313,15 @@
 or more clients.  You may add clients using step 7 to create and sign their
 certificate requests.
 
-9. Copy server_cert.pem, server_key.pem, and cacert.pem to a directory on the build server.  IMPORTANT: make sure only the build server's user can read server_key.pem, since it is the server's private key.  Then, modify the server's CONFIG.py file and point the respective config options to the _full_ path to each file.
-
-10. Copy client1_cert.pem, client1_key.pem, and cacert.pem to a direcrory on the build client.  IMPORTANT: make sure only the build client's user can read client1_key.pem, since it is the client's private key.  Then, modify the client's CONFIG.py file and point the respective config options to the _full_ path to each file.
+9. Copy server_cert.pem, server_key.pem, and cacert.pem to a directory on the 
+build server.  IMPORTANT: make sure only the build server's user can read 
+server_key.pem, since it is the server's private key.  Then, modify the 
+server's CONFIG.py file and point the respective config options to the _full_ 
+path to each file.
+
+10. Copy client1_cert.pem, client1_key.pem, and cacert.pem to a directory on 
+the build client.  IMPORTANT: make sure only the build client's user can read 
+client1_key.pem, since it is the client's private key.  Then, modify the 
+client's CONFIG.py file and point the respective config options to the _full_ 
+path to each file.
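The CA bootstrap that the "Setting up the Build System Certificate Authority"
section walks through can be sketched end to end with stock openssl commands.
This is a minimal sketch, not the README's exact recipe: the output file names
match the README (cacert.pem, server_key.pem, server_cert.pem), but the key
size, the -subj values, and the use of `openssl x509 -req` for signing are
assumptions made so the example is self-contained.

```shell
# Minimal sketch of the CA flow: CA key + self-signed cert, then a signed
# server cert.  Assumptions: 2048-bit RSA keys, placeholder -subj CNs, and
# `openssl x509 -req` signing (the README's steps may instead use
# `openssl ca` driven by an openssl.cnf).
set -e
mkdir -p demoCA/private

# Build System CA key and self-signed certificate
openssl genrsa -out demoCA/private/cakey.pem 2048
openssl req -new -x509 -key demoCA/private/cakey.pem -out cacert.pem \
    -days 3650 -subj "/CN=Example Build System CA"

# Server key and certificate request
openssl genrsa -out server_key.pem 2048
openssl req -new -key server_key.pem -out server_req.pem \
    -subj "/CN=buildserver.example.com"

# Sign the server request with the CA
openssl x509 -req -in server_req.pem -CA cacert.pem \
    -CAkey demoCA/private/cakey.pem -CAcreateserial \
    -out server_cert.pem -days 365

# The same request/sign pair is repeated once per builder, producing
# client1_key.pem / client1_cert.pem and so on.
openssl verify -CAfile cacert.pem server_cert.pem
```

After this, server_cert.pem, server_key.pem, and cacert.pem are the files
copied to /etc/plague/server/certs in step 9, with server_key.pem readable
only by the build server's user.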
 



