[olpc-software] Package manager stuff

Mike Hearn mike at plan99.net
Tue Mar 14 18:02:36 UTC 2006


Hi,

Somebody pointed out the 'no package management' thread to me as they
thought it might be relevant to autopackage, which does indeed focus on
being easy for users with zero UNIX experience (but still handles
dependencies etc.). I don't know exactly what hardware OLPC machines
will have, so forgive me if I make some bad assumptions!

Anyhow, lately I've been thinking about some new distro design ideas
that would solve Chris Blizzard's use cases:

* Kids want to be able to share software with each other easily
* It must be super-robust. RPM/apt technologies are just not robust
  enough
* Teachers must be able to easily distribute software to kids without
  a network

What I wanted to try is simple enough. The base operating system/desktop
platform is installed as per normal, then /usr, /etc and /var are moved
to /system/usr, /system/etc and /system/var. A new /applications
directory is created and monitored by a simple daemon that registers a
watch on it. As subdirectories come and go, they are union mounted over
/usr, which becomes a composite of the base system and each individual
piece of software on the machine. /applications is not the only place
that can be monitored this way - removable media and network mounts can
be too.

So basically:

/bin            ]
/proc           ]  - nothing changes 
/dev            ]

/system/usr     ]
/system/etc     ]  - these contain gnome/X/online help/extra stuff
/system/var     ]

/applications/Inkscape 0.42.app/bin/inkscape
/applications/Inkscape 0.42.app/share/inkscape/datastuff.xml
/applications/Inkscape 0.42.app/share/applications/inkscape.desktop

/applications/LearnWithMe 1.0.app/bin/learn-with-me
/applications/LearnWithMe 1.0.app/share/icons/applications/scalable/learnwithme.svg

/media/usbkey/Some Program.app/bin/whatever

/usr    ] - this is a union composite of /system, /applications/* and
            any .app directories found on CD-ROMs, USB Keys or network
            mounts. System always takes precedence over network mounts,
            network mounts always override removable media,
            removable media always overrides /applications.
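
To make this concrete, here is a very rough sketch of what the watcher
daemon might look like. It's in Python, polls instead of using inotify,
and assumes a unionfs-style filesystem that accepts an ordered "dirs="
branch list; the paths, mount options and poll interval are all
illustrative, not a design:

#!/usr/bin/env python
# Sketch only: polls the watched roots rather than using inotify, and
# assumes a unionfs-style filesystem with an ordered "dirs=" branch
# list where earlier branches win namespace conflicts.
import os
import subprocess
import time

# Watched roots, listed lowest-precedence first. /system/usr is added
# separately and always wins. A real version would also look one level
# down under /media (e.g. /media/usbkey/*.app).
WATCH_ROOTS = [
    "/applications",   # locally installed .app directories
    "/media",          # removable media (USB keys, CD-ROMs)
    "/mnt/net",        # network mounts (hypothetical path)
]

def find_app_dirs(root):
    """Return the .app directories currently visible under a root."""
    try:
        entries = sorted(os.listdir(root))
    except OSError:
        return []
    return [os.path.join(root, e) for e in entries if e.endswith(".app")]

def remount_usr(app_dirs):
    """Rebuild /usr as a union of /system/usr and every .app directory."""
    # Earlier branches take precedence, so the base OS comes first.
    branches = ["/system/usr=ro"] + ["%s=ro" % d for d in app_dirs]
    opts = "dirs=" + ":".join(branches)
    # A real implementation would add/remove individual branches with a
    # remount; tearing the whole union down is just the simplest sketch.
    subprocess.call(["umount", "/usr"])
    subprocess.call(["mount", "-t", "unionfs", "-o", opts, "none", "/usr"])

def main():
    previous = None
    while True:
        current = []
        # Walk the roots from highest to lowest precedence so network
        # mounts land ahead of removable media, and removable media
        # ahead of /applications, in the branch list.
        for root in reversed(WATCH_ROOTS):
            current.extend(find_app_dirs(root))
        if current != previous:
            remount_usr(current)
            previous = current
        time.sleep(2)

if __name__ == "__main__":
    main()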

There is no package manager database, because the filing system
namespaces can replace it entirely.
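
For instance, the questions you would normally ask rpm or dpkg can be
answered just by walking those directories. A hypothetical sketch (the
.app layout and the list of roots are assumptions, not a spec):

import os

APP_ROOTS = ["/applications", "/media", "/mnt/net"]   # assumed locations

def app_dirs():
    """Every visible .app directory - this is the whole 'database'."""
    for root in APP_ROOTS:
        try:
            entries = sorted(os.listdir(root))
        except OSError:
            continue
        for entry in entries:
            if entry.endswith(".app"):
                yield os.path.join(root, entry)

def installed_apps():
    """Rough equivalent of 'rpm -qa'."""
    return [os.path.basename(d) for d in app_dirs()]

def who_provides(filename):
    """Rough equivalent of 'rpm -qf': which app ships a given file?"""
    for app in app_dirs():
        for _dirpath, _dirs, files in os.walk(app):
            if filename in files:
                return os.path.basename(app)
    return None

if __name__ == "__main__":
    print(installed_apps())
    print(who_provides("inkscape"))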

This scheme may appear exotic but solves the following problems:

* End users no longer need root access to install software. Simply
  putting a .app directory somewhere watched by the new daemon is
  enough. This clean separation of application and operating system
  improves security and reduces the likelihood of mistakes.

* It's less likely users will install something that messes up their
  system because the base OS will always override everything else in
  the event of namespace conflicts.

* It's easy to use: uninstalling is just dragging the application to
  the trash can or deleting it from the menus.

* You can have multiple versions of programs installed at once. Useful
  when you aren't sure if the new version will meet your needs or not
  (or for people testing alphas/betas).

* Chris' use case of application sharing "just works" - put the USB
  key in the computer and the application instantly appears in the
  menus, is linked to file associations, can be registered with DBUS
  etc., just through the act of merging the app's files into the main
  namespace.

* Teachers can distribute applications to end users using removable
  media or a network link. No complex setup is required: as long as
  the operating system can "see" the directory where the applications
  are, they will be merged in correctly.

* It eliminates the need for a package manager [database] because the
  filing system itself can answer the questions a package database
  would: what is installed, which app a file belongs to, and so on.

* It "solves" (hacks around) the lack of binary relocatability in most
  apps, which will assume they are installed to /usr and break in subtle
  ways if they aren't. OTOH I want to fix this using kernel extensions
  for autopackage anyway.

Whether some extra installation mechanism is necessary or not depends on
your other goals: for instance, do you want people to be able to type
"linux photo editor" into Google and double-click to install the GIMP?
Or do you want to build a repository of software packages for the OLPC
distro? Do you care about online updates, or will that cause more
problems than it solves in this environment?

Unsurprisingly, my personal feeling on this is that having a repository
for OLPC laptops is silly and that it'd be better to share binary
autopackages with other distros. That way (at least in theory) more
software will be available sooner, because any Linux user can take part
in building packages. It also lets the OLPC project focus engineering
time on actually fixing the OS rather than wrapping programs into
OLPC-specific RPMs/DEBs.

When you drop apt/rpm you lose online updates, which may need to be
brought back some other way. The approach I was thinking of is switching
to unified "update packs" that are LZMA compressed tarballs containing a
set of binary deltas and some HTML release notes. A single update pack
may make many changes to the base operating system, but they would
probably revolve around a single 'theme'. Update packs are numbered
sequentially and can easily be downloaded and distributed to end users
via removable media. They'd be fetched and applied automatically
by default. This scheme improves bandwidth efficiency over the current
system used in Fedora.
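
For illustration, applying a pack might look something like the sketch
below. The pack layout (a deltas/ tree mirroring /system plus
notes.html), the .xz container and the use of the bsdiff/bspatch tools
are all assumptions on my part, not a spec:

import os
import subprocess
import tarfile

def apply_update_pack(pack_path, system_root="/system"):
    """Unpack an LZMA-compressed update pack and apply its binary deltas.

    Assumed layout inside the tarball:
        notes.html                release notes shown to the user
        deltas/usr/bin/foo.diff   binary delta for /system/usr/bin/foo
    """
    workdir = "/tmp/update-pack"
    with tarfile.open(pack_path, "r:xz") as tar:    # LZMA-compressed tar
        tar.extractall(workdir)

    deltas_root = os.path.join(workdir, "deltas")
    for dirpath, _dirs, files in os.walk(deltas_root):
        for name in files:
            delta = os.path.join(dirpath, name)
            # Map deltas/usr/bin/foo.diff back onto /system/usr/bin/foo.
            rel = os.path.relpath(delta, deltas_root)[:-len(".diff")]
            target = os.path.join(system_root, rel)
            # bspatch <old> <new> <patch>: write to a temporary file and
            # rename, so a half-applied pack never leaves a broken binary.
            subprocess.check_call(["bspatch", target, target + ".new", delta])
            os.rename(target + ".new", target)

# e.g. apply_update_pack("/media/usbkey/olpc-update-0042.tar.xz")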

Application updates can be handled by the scheme used to install the
applications themselves - in my scratch notes autopackage handled it,
but ZeroInstall would work too.

Jim raised the issue of dependencies and efficiency. Having seen the
mess that the non-UNIX-guru community gets into through our usage of
programs like apt and yum, my own view is that it's better to sacrifice
a small amount of efficiency than to risk a usability disaster.
Technology will improve with time, but one only needs to browse
linuxquestions.org or ubuntuforums.org for a few hours to see the sticky
situations people get themselves into through our [ab]use of depsolvers
today.

IMHO a well designed platform, which picks the libraries it ships based
on stability and popularity, would ensure that 90% of common app
dependencies are already on the system. Combined with static linking,
smart dead code elimination and weak linkage, this keeps the loss of
efficiency minimal.

thanks -mike



