
Re: Release Engineering Meeting Recap from Monday 16-APR-07

On Tue, Apr 17, 2007 at 06:58:52AM +0200, Thorsten Leemhuis wrote:
> >  1. In the future we should consider a mass rebuild of all packages 
> >around, but no later than test2
> Hmmm, this was discussed in depth at the end of this thread:
> https://www.redhat.com/archives/fedora-packaging/2007-April/msg00017.html
> Some people would like to see a mass rebuild, some others are against it.
> I'm one of those against it. Reasons:

I'm one of those for it.

> - Seems we have quite some users in countries where internet bandwidth is 
> unreliable and costly. If we mass-rebuild everything each time, those 
> users have to download a lot of new stuff where nothing changed besides 
> the release. That makes Fedora harder to use for them.

The big players that are always on the user's choice list get rebuilt
almost by definition anyway. The download savings are therefore
marginal at best.

> - the update process gets much longer for each and everyone of us if 
> each package has to be downloaded and updated.
> - the packages out in the wild are tested and known to work. Rebuilt 
> packages have to prove again that everything is fine (which should be 
> the case most of the time, but in rare cases isn't)

That's a contradiction: either the packages are stable and will
survive a rebuild, or they are fragile enough to break apart when
rebuilt in the current environment. Furthermore, the testing these
packages have received was on another release with a different build
and run-time environment, so they may not be as stable or as well
tested as you think they are.

The dangers of letting packages go to seed are the following:

o non-deterministic package rebuilds: It is not guaranteed that a
  package will rebuild and function the same on the current release
  (we may have automated rebuild facilities, but no one tests runtime
  behaviour). A simple rebuild may unearth that the package needs
  further attention (in fact that is Thorsten's argument against
  rebuilding, but the attention will be needed either way; the
  question is when to spend the time on it, see below).

o slow security responses: A one-line fix may result in a package
  breaking due to the above. This means that security updates may
  ship broken packages, or may require more time in QA to ensure
  that an ancient, never-rebuilt package really works properly.

o The choice of what to rebuild or not requires more developer time
  than fixing broken rebuilds: Currently some heuristics were used to
  cherry-pick what to rebuild. This requires a careful examination
  that, if done properly, consumes as much or more developer time than
  fixing any broken rebuilds. If not done carefully, some dependencies
  will be missed. For example, the current upgrade sees the following
  changes in the build tools:

                 FC6               F7
   gcc           4.1.1-30          4.1.2-8
   glibc         2.5-3             2.5.90-20

  Perhaps the gcc or binutils changes are not that big, but the glibc
  ones seem to be; e.g. 2.5.90 is a pre-release of 2.6, and just
  checking the API (the glibc-headers) gives:

   41 files changed, 297 insertions(+), 220 deletions(-)
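  That kind of API check can be approximated by diffing the two
  releases' header trees and counting the changed lines. A minimal,
  self-contained sketch (the directories and the prototype change
  below are mock data standing in for the unpacked glibc-headers
  payloads, not the real FC6/F7 trees):

```shell
# Mock header trees; in practice these would come from unpacking the
# two glibc-headers packages (e.g. via rpm2cpio | cpio -id).
mkdir -p fc6/include f7/include
printf 'int getline(char **lineptr, size_t *n, FILE *stream);\n' \
    > fc6/include/api.h
printf 'ssize_t getline(char **lineptr, size_t *n, FILE *stream);\n' \
    > f7/include/api.h

# Count changed lines (the +/- hunk lines, excluding the ---/+++ file
# markers), similar to what a diffstat summary reports.
diff -ru fc6/include f7/include | grep -c '^[-+][^-+]'
# prints 2
```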

  Other examples are packages (like bridge-utils) being built against
  kernel-headers. F7 is now shipping a bridge-utils that was built
  against 2.6.18 kernel headers at the very beginning of the FC6
  cycle. The questions that come up are: Did anyone check whether the
  bridging interface of the kernel changed between 2.6.18 and 2.6.21?
  Were any interfaces deprecated? Will bridge-utils work on F7, and
  if not, will a rebuild suffice?

  I've picked bridge-utils as an example because it was the first
  package that looked suspicious when I went through an alphabetical
  list; that doesn't mean that bridge-utils is now broken. Still, the
  questions raised are valid for any package depending on
  kernel-headers.
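  The bookkeeping that careful cherry-picking would require can be
  sketched like this; the package list, file name, and header
  versions below are illustrative, not real build data:

```shell
# Sketch: flag packages whose recorded kernel-headers build version
# lags behind the headers the new release ships. All data here is
# made up for illustration.
cat > built_against.txt <<'EOF'
bridge-utils 2.6.18
iptables 2.6.20
e2fsprogs 2.6.21
EOF

current=2.6.21
awk -v cur="$current" \
    '$2 != cur { print $1, "built against", $2, "- needs review" }' \
    built_against.txt
# prints:
# bridge-utils built against 2.6.18 - needs review
# iptables built against 2.6.20 - needs review
```

  Even this toy version shows the problem: someone has to maintain the
  "built against" data and review every flagged package, which is the
  developer time the cherry-picking approach was supposed to save.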

o Moving bugs from the development cycle to the maintenance cycle:
  Effectively, the argument for not rebuilding during development
  time and breaking N fragile packages means that once these N broken
  packages are spotted after the release, they will need the
  developer's attention just the same. We are only moving the bugs
  from the development cycle to maintenance. Do we really want that?
  From a technical and *marketing* POV we don't. It is better to ship
  a good release from the start than to stumble over rebuild bugs
  over and over again, and that now includes users as well as
  developers. And from a *business* POV the resources spent are the
  same: someone must fix the packages. So let's do it during the
  development cycle instead of during maintenance, where the users
  will feel like guinea pigs.
In a nutshell: There are no significant savings in user downloads,
and there are no savings in developer resources when not rebuilding
packages to match the upcoming release environment. But there is bad
publicity if packages break after the release, when this could have
been prevented with a simple mass rebuild during development time
instead of outsourcing it to the maintenance cycle.
Axel.Thimm at ATrpms.net

