
Re: Heads-up: brand new RPM version about to hit rawhide



Kevin Kofler wrote:
Andreas Ericsson <ae <at> op5.se> writes:
* make releases without tags: For example, the weekly trunk snapshots of
KDE don't get tags, nor do the extragear tarball releases.
I'm not sure if you saw my email regarding the requirements on the SCM for it
to be useful in the scenario Doug Ledford proposes, but right at the top of
the list comes the ability to uniquely name one particular commit. If you
have that, you don't need tags.

The problem with commit IDs is that they're a lot less readable and intuitive than tags.


True that. I thought the IDs were going to be used by computers though, and
packages would still have version numbers.
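A minimal sketch of that split (a throwaway git repo, purely as an illustration; names and versions are invented): the human-readable tag and the machine-readable commit ID name the same release, and either can be mapped to the other.

```shell
# Throwaway repo; identity settings are only so the commands run anywhere.
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
git commit -q --allow-empty -m "release 4.0.0"
git tag v4.0.0                          # human-readable version name
id=$(git rev-parse "v4.0.0^{commit}")   # machine-readable unique commit ID
git describe --tags "$id"               # prints v4.0.0
```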

It would be extremely poor project policy to move a tag after it's made
public

Are you trying to imply that KDE has "extremely poor project policy"?

If the code corresponding to a particular version can be changed once
released publicly, then yes.

I think it only makes sense to do respins that way. A release isn't always perfect on the first try.


I don't. I think a much saner approach would be to set a new tag when a new
release is done. When something is tagged as "v4.0.0" (or whatever), people
expect that version to damn well have the same code as it did last week.
Otherwise you can have kde-4.0.0.tar.gz with one particular bug and
kde-4.0.0.tar.gz next week where that bug is missing (but something
else is broken). Calling the first package kde-4.0.0rc1.tar.gz would make
sense though, or calling the second one kde-4.0.1 would work just as well.
Giving several different versions of the same package the same version
number is just broken. It'd be like Amazon and Google sharing the same
IP address.
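Under that policy a respin simply gets a fresh tag and a fresh tarball name. A git-flavoured sketch (repo, versions and messages invented for the example, not taken from KDE's actual process):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
git commit -q --allow-empty -m "release 4.0.0"
git tag -a v4.0.0 -m "KDE 4.0.0"
# A bug turns up after release: fix it and cut a NEW tag
# instead of moving the old one.
git commit -q --allow-empty -m "fix post-release bug"
git tag -a v4.0.1 -m "KDE 4.0.1 (respin)"
git archive --prefix=kde-4.0.1/ -o kde-4.0.1.tar.gz v4.0.1
```

The old tarball keeps its name and its contents; nobody who downloaded kde-4.0.0.tar.gz ever gets surprised.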

For centralized scms, moving tags doesn't matter in the slightest, since
they can't name a commit uniquely anyway.

That's not true, a SVN commit is uniquely named by the revision number.

If you have a history looking like this:

A--B--C--D
      \
       E

E and D can end up being referred to by the same revision number on
different paths. As such, a revision number alone is not a unique name;
you also need the path.
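The same fork, reproduced in a throwaway git repo (an illustration, not part of the original argument), shows the two tips getting genuinely distinct names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
for c in A B C; do git commit -q --allow-empty -m "$c"; done
c=$(git rev-parse HEAD)               # remember the fork point C
git commit -q --allow-empty -m "D"
d=$(git rev-parse HEAD)               # tip of the first line
git checkout -q -b side "$c"          # branch off at C
git commit -q --allow-empty -m "E"
e=$(git rev-parse HEAD)               # tip of the second line
[ "$d" != "$e" ] && echo "D and E have distinct IDs"
```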

And in fact, being centralized allows SVN to actually use totally-ordered numbers, not random IDs coming from the checksum of something (which are bound to break the day you run into a collision, by the way).


True that. Current maths suggests that at the current commit tempo of the kernel
(10487 commits between 2.6.25 and 2.6.26, most of them merges), we'll run into the
first SHA-1 collision a mere 16 billion years after the calculated end of the
universe. I can see how that's a real problem....

The best way to reproducibly name a tag in SVN would be to use the tag name (which would be part of the URL)

So what happens when the repository moves? Then you no longer have the same
url and you need to trust the new repository location to have done an exact
clone of the source repository. Since there are no commit checksums, you
can't know that the version you get from the new repo is the same as you
would have gotten from the old one.

Oh, and the original repo can get modified (rewritten; it's not supposed to
happen, but it happens). Since there are no checksums, you're left flying
blind and just hoping.

_and_ the revision number. Tags in SVN are just directories, so you can use the directory at the given revision and you'll get the exact thing you originally checked out.
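The tag-plus-revision trick can be sketched against a local throwaway repository (assuming svnadmin is available; layout and names invented for the example). Pinning the tag directory at the revision it was created reproduces exactly what was released, even if the tag directory is touched later:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
svnadmin create repo
url="file://$tmp/repo"
svn -q mkdir -m "layout" "$url/tags"                      # r1
svn -q mkdir -m "tag 4.0.0" "$url/tags/v4.0.0"            # r2: the release
svn -q mkdir -m "oops" "$url/tags/v4.0.0/sneaky-change"   # r3: tag modified later
# Peg the tag directory at r2 to get the tag as originally made.
svn -q checkout "$url/tags/v4.0.0@2" tag-as-released
```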

Me, being mostly a user who also happens to be a programmer, would love
to have an easy way to be able to get a clone of <insert-package-here>,
find the sources corresponding exactly to my version of the package and
then fix whatever issues I have with it. Even if it was just me willing
to do that (which I highly doubt), you'd have a net gain of one extra
spare-time developer. You can't possibly argue that making it easier for
casual developers to get involved is a bad thing.

With the current system, you just have to check out the package and run "make" in one of the branches to get all the source files (usually just a tarball) downloaded from the lookaside cache. Extract the tarball and you have your sources. The patches aren't applied there, but that's by design. Patches are supposed to be independent, so (with some exceptions, e.g. the kernel, which often includes series of interdependent patches automatically generated from some git repository) we develop each patch against the _original_ sources; only when actually building the package do we apply them all.


That's nice and allowing for maximum flexibility, but what I'm guessing 99% of
the users want is to be able to easily get the source code they're running so
they can start fiddling with it.

My workflow when I develop a patch for KDE is:
1. I extract the tarball (e.g. kdebase-workspace-4.0.98.tar.bz2).
2. This creates a directory (e.g. kdebase-workspace-4.0.98).
3. I copy this directory, appending the name of the patch I intend to write (e.g. kdebase-workspace-4.0.98-consolekit-kdm).
4. I make my changes in that directory (e.g. kdebase-workspace-4.0.98-consolekit-kdm).
5. I diff the original vs. the patched directory (e.g. diff -Nur kdebase-workspace-4.0.98 kdebase-workspace-4.0.98-consolekit-kdm > kdebase-workspace-4.0.98-consolekit-kdm.patch).
6. I copy the patch to the package branch.
7. I cvs add it to the repository.
8. I apply it in the specfile (2 lines, e.g. Patch1: kdebase-workspace-4.0.98-consolekit-kdm.patch in the header and %patch1 -p1 -b .consolekit-kdm in the %prep section).
9. I commit.
10. I run make tag.
11. I run make build BUILD_FLAGS=--nowait.
12. Koji sends me the results.
13. If the build failed, I:
* repeat steps 4, 5, 6 and 9
* run make force-tag
* resubmit the failed build through the Koji web interface
as often as needed until it builds.
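Steps 1-5 above (extract, copy, edit, diff) boil down to something like the following; the file names are stand-ins for the real KDE sources, and the diff's exit status 1 on differing trees is expected:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# Stand-in for extracting kdebase-workspace-4.0.98.tar.bz2 (steps 1-2)
mkdir kdebase-workspace-4.0.98
echo "original code" > kdebase-workspace-4.0.98/kdm.cpp
# Step 3: copy the tree, appending the patch name
cp -r kdebase-workspace-4.0.98 kdebase-workspace-4.0.98-consolekit-kdm
# Step 4: edit in the copy
echo "patched code" > kdebase-workspace-4.0.98-consolekit-kdm/kdm.cpp
# Step 5: diff original vs. patched (diff exits 1 when trees differ, hence || true)
diff -Nur kdebase-workspace-4.0.98 kdebase-workspace-4.0.98-consolekit-kdm \
  > kdebase-workspace-4.0.98-consolekit-kdm.patch || true
```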


You poor bastard.

I know this sounds overly complicated, but keep in mind that most of these steps are just a couple of mouse clicks or one line in a terminal; I intentionally detailed them so a beginner can understand what's going on. I'm not sure an SCM with fully-exploded sources would really make that easier.



My workflow when developing a feature for our own projects is:
1. git checkout -b <feature>
2. edit, edit, edit
3. commit
4. goto 2 as necessary
5. git checkout <master|maint|next> && git merge feature
6. git push
7. magic makes the package build and install on one machine of every
  supported arch, sending me an email of the results
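Steps 1-6 above, made concrete in a throwaway repo (branch and file names invented; the build magic of step 7 is site-specific and omitted). The merge here happens to fast-forward:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
git commit -q --allow-empty -m "initial"
main=$(git rev-parse --abbrev-ref HEAD)   # whatever the default branch is called
git checkout -q -b feature                # step 1
echo "fix" > fix.txt                      # step 2: edit, edit, edit
git add fix.txt
git commit -q -m "add fix"                # step 3
git checkout -q "$main"                   # step 5: back to the mainline...
git merge -q feature                      # ...and merge the feature branch
```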


If I do it for some project where I'm not upstream, I do:
repeat 1-4
5. git submit master..feature


In short, I deal with a single tool to handle everything regarding
development. If I need patches kept around for a while, I keep them
as changesets in the scm and regenerate them when needed. If I miss
the merge-window for an upstream feature-release, I can simply let
the tool move my patches to where they belong when the next
merge-window is about to open, rather than having to maintain separate
patch-files which I need to handle manually, which is tedious,
time-consuming and error-prone.
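One way to do that with git (a sketch with invented names, not the author's exact setup): carry the local changes as commits on a branch, rebase the whole series onto each new upstream release, and regenerate the patch files only when something actually needs them:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
git commit -q --allow-empty -m "upstream v1.0"
git tag v1.0
main=$(git rev-parse --abbrev-ref HEAD)
git checkout -q -b local-patches          # local changes live as commits
echo "tweak" > tweak.txt
git add tweak.txt
git commit -q -m "local tweak"
git checkout -q "$main"                   # meanwhile, upstream moves on
git commit -q --allow-empty -m "upstream v1.1"
git tag v1.1
git checkout -q local-patches
git rebase -q v1.1                        # forward-port the whole series at once
git format-patch -o patches v1.1          # regenerate patch files on demand
```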

Like Doug says, maintaining up to 5 or so patches that are all sort
of small (say, less than 2000 lines of diff) is manageable, but it's
not as easy as it could be. I gave up on it completely when I had
35 rather small patches to juggle and had to re-apply them for every
new version we released of an upstream project. Earlier I used to
spend 1-2 full days forward-porting those patches. Now I do it in
0.1-15 minutes, depending on conflicts.

--
Andreas Ericsson                   andreas ericsson op5 se
OP5 AB                             www.op5.se
Tel: +46 8-230225                  Fax: +46 8-230231
