Re: [Fedora-livecd-list] [RFC/PATCH] livecd rebootless installer
- From: Jonathan Steffan <jonathansteffan gmail com>
- To: fedora-livecd-list redhat com
- Subject: Re: [Fedora-livecd-list] [RFC/PATCH] livecd rebootless installer
- Date: Mon, 09 Jul 2007 02:26:06 -0600
I have a use for something like this.
On Sun, 2007-07-08 at 21:51 -0500, Douglas McClendon wrote:
> Normal Fedora 7 Installed On Hard Disk System
> (assume just 1 non lvm partition /dev/sda1)
> -The bios loads the grub boot loader from the MBR of /dev/sda
> -grub knows how to read its config from /dev/sda1:/boot/grub/grub.conf
> -grub is configured to boot a specific kernel+ramdisk+appendstring, namely
> /dev/sda1:/boot/vmlinuz-someversion, /dev/sda1:/boot/initrd-someversion, and a
> string of kernel command-line arguments.
> -control is thusly passed to the kernel, and the kernel then gunzips and
> extracts the cpio of the specified initrd (which I think grub copied to a well
> known place in ram. Only reason I might know this is because qemu's crafty
> -initrd feature screwed it up for larger initrds recently, though it has been fixed)
> -the kernel having then extracted the contents into a ram based filesystem,
> passes control to /init (or maybe /sbin/init, or maybe whatever init= was
> specified on the cmdline).
> -now the fun starts. This init is a nash or a bash script, whose job it is to
> mount the real root filesystem (e.g. the ext3fs on /dev/sda1) and then
> pivot_root to it.
> -finally, control is passed to /sbin/init on the real disk-based root
> filesystem, at which point the contents of the well-known /etc/inittab start to
> take effect.
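The normal-boot chain described above can be sketched as a minimal initrd /init. This is a simplified illustration, not Fedora's actual nash script; /dev/sda1 is just the example's root device:

```shell
#!/bin/bash
# Sketch of a disk-boot initrd /init: mount the real root, then hand
# control to the real /sbin/init.  (Illustrative only; the real script
# also loads modules, runs udev, etc.)

mount_real_root() {
    mkdir -p /sysroot
    # Mount read-only; the distro init remounts read-write after fsck.
    mount -o ro /dev/sda1 /sysroot
}

hand_off() {
    cd /sysroot
    # Swap / for the real root, then replace PID 1 (exec, never fork).
    pivot_root . initrd
    exec chroot . /sbin/init </dev/console >/dev/console 2>&1
}
```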
> Now then, what the Fedora-7 livecd does is along these lines:
> -instead of the bios booting grub loaded on the mbr of a disk, it boots grub (or
> perhaps isolinux) from the bootsector of the cdrom.
We boot from a local flash instead.
> -this bootloader behaves much like above, but pulling a
> kernel+initrd+append_args from some place on the cdrom.
> -now the fun begins after the initrd is extracted and mounted in a ram based fs
> as normal. An entirely different /init script within the initrd will go about
> the business of mounting the 'real root filesystem'. In this case, first the
> cdrom's iso9660 fs is mounted. Then a squashfs image file is loopback mounted
> from within the iso9660 fs. Then a sparse ext3fs image file is loopback mounted
> from within the squashfs.
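The nested mount chain just described might look roughly like this. Paths (/LiveOS/squashfs.img, os.img) are illustrative, not necessarily the exact Fedora 7 layout; the ext3 image is only loop-associated here, since the snapshot step below consumes the device rather than a mount:

```shell
# Sketch of the livecd's nested mount chain: iso9660 -> squashfs ->
# ext3 image (paths are made up for illustration).

mount_live_chain() {
    mkdir -p /mnt/cdrom /mnt/squash
    # 1. the cd itself
    mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom
    # 2. the squashfs image inside the iso9660 fs
    mount -o ro,loop /mnt/cdrom/LiveOS/squashfs.img /mnt/squash
    # 3. read-only loop association for the ext3 image inside the squashfs
    LOOP_EXT3=$(losetup -r -f --show /mnt/squash/os.img)
}
```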
Before the squashfs is loop mounted, we do some checks on it... most
likely a checksum of sorts. At this point we test if this is the latest
image. If not, we pull the latest image to a ram drive and loop mount
that instead of the local squashfs image. I have assumed we would be
able to have some sort of network access at this stage.
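The freshness check described here could be as simple as comparing checksums. A runnable sketch, where the "published" hash is just the sha256 of a one-byte stand-in file, and the fetch step is only hinted at in a comment:

```shell
# Sketch of the image-freshness gate: checksum the local squashfs and
# compare against a published latest-image checksum.  File names and the
# hash are demo stand-ins, not real release artifacts.

latest_sum="ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb"

tmp=$(mktemp -d)
printf 'a' > "$tmp/squashfs.img"      # stand-in for the local image

local_sum=$(sha256sum "$tmp/squashfs.img" | cut -d' ' -f1)

if [ "$local_sum" = "$latest_sum" ]; then
    decision="boot local image"
else
    decision="fetch latest image to ramdisk"   # e.g. download to /dev/shm
fi
echo "$decision"
rm -rf "$tmp"
```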
> - now the REAL fun begins. A ram based filesystem is created. A sparse file
> overlay is created within it. Now a device mapper snapshot is created using the
> read-only ext3fs image, and the read-write overlay file. (I'm skipping some
> loopback device associations, and in general probably misnaming a few things, as
> this is unashamedly from the hip, and not suitable to be published). Now, this
> magic devicemapper snapshotted device appears as /dev/mapper/live-rw, and
> appears to be a read-write ext3 filesystem, except the writes really get tucked
> away in ram (which is going to eat away at your ram).
> - That /dev/mapper/live-rw gets mounted as the 'real root filesystem', and
> pivot_root is called, and then things progress as normal.
If the squashfs image has not changed, we pivot_root right away.
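The snapshot setup above can be sketched as two steps: a sparse overlay file in a ram-backed fs, then a dm-snapshot over the read-only ext3 image. The dmsetup table line follows the dm-snapshot target's documented format, but treat the whole thing as a sketch, not the exact livecd code:

```shell
# Sketch of the copy-on-write root: sparse overlay + device-mapper
# snapshot, appearing as /dev/mapper/live-rw.

make_overlay() {
    # Sparse file: apparent size 512 MB, but consumes (almost) no ram
    # until the snapshot actually writes into it.
    dd if=/dev/null of=/overlayfs/overlay bs=1024 seek=524288 2>/dev/null
}

make_snapshot() {
    base=$(losetup -r -f --show /mnt/squash/os.img)   # read-only base
    cow=$(losetup -f --show /overlayfs/overlay)       # writable overlay
    size=$(blockdev --getsz "$base")                  # size in 512b sectors
    # dm-snapshot table: <origin> <cow-device> <persistent?> <chunksize>
    # N = non-persistent (the overlay is ram-backed anyway)
    echo "0 $size snapshot $base $cow N 8" | dmsetup create live-rw
}
```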
> Now then- what the rebootless installer patch does differently
> -Just after the /dev/mapper/live-rw gets set up in the initrd, instead of
> mounting it as the real root filesystem, it gets used to create a raid1
> 'mirror'. quotes because in this case the 'mirror' only has 1 device, rather
> than the usual 2. The mirror is visible as /dev/md7, and THAT gets mounted as
> the real root filesystem, before pivot_root is called, and everything progresses
> as normal.
Neat. Could we create the raid1 from a new loop mounted squashfs that
has just been loaded into ram?
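The one-legged mirror the patch describes might be built along these lines. The mdadm invocation is my guess at the mechanism, not the patch's actual command; in particular, whether `--build` accepts a `missing` slot this way is an assumption:

```shell
# Sketch of building the single-device raid1 over the snapshot device.
# --build avoids writing a superblock; the second slot is left open so a
# real disk can be hot-added later.  (Flags are assumptions, not the
# patch's verbatim code.)

make_degraded_mirror() {
    mdadm --build /dev/md7 --level=1 --raid-devices=2 \
          /dev/mapper/live-rw missing
    mount /dev/md7 /sysroot    # /dev/md7 becomes the real root fs
}
```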
> But because of /dev/md7 being the 'real root filesystem', long after boot, you
> can hot-add another device to the mirror, in this case, the target volume that
> you want to install the system on (e.g. /dev/sda1). After you hot-add, the
> raid/md driver starts synchronizing the data from /dev/mapper/live-rw to the
> newly added device.
I would want to sync back to the flash at this point.
> When this finishes, you can hot-remove /dev/mapper/live-rw, at which
> point the system is running from /dev/sda1, just as if you had installed there
> and rebooted (with the caveat that there is this /dev/md7 layer sitting there
> until the next reboot). And once /dev/mapper/live-rw is removed from the
> /dev/md7 array, the resources that constructed it (i.e. the files on the cdrom,
> and that overlay file in a ram-based fs) can be released/deleted/unmounted.
> Thus you stop suffering the penalty of that overlay eating up your ram, and you
> are free to eject the cdrom.
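The hot-add / sync / hot-remove sequence above, sketched with mdadm's hot-management commands (the resync-wait loop and teardown are my own illustration, not the patch's code):

```shell
# Sketch of the post-boot migration: add the target, wait for the
# mirror to sync, drop the cd-backed leg, release its resources.

migrate_to_disk() {
    mdadm /dev/md7 --add /dev/sda1          # hot-add the install target

    # crude wait for the resync to finish
    while grep -q resync /proc/mdstat; do sleep 5; done

    mdadm /dev/md7 --fail   /dev/mapper/live-rw
    mdadm /dev/md7 --remove /dev/mapper/live-rw

    # now the snapshot, overlay, and cd mounts can be torn down
    dmsetup remove live-rw && eject /dev/cdrom
}
```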
> Now, for the sake of simplicity, we will assume that "mdadm /dev/md7 --grow
> --size=max" will actually work, and the 3.5G ext3fs that got migrated to your
> 100G /dev/sda1 partition, can be grown with resize2fs's ability to online expand
> a live ext3 filesystem. (technically this does not yet work, so refer to all
> those nasty workarounds).
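For completeness, the expansion step that paragraph assumes would look like this; as noted above it did not actually work yet at the time, so this is purely a sketch of the intended interface:

```shell
# Sketch of the final grow step: let md claim all of /dev/sda1, then
# online-expand the ext3 fs to fill it.

grow_to_partition() {
    mdadm --grow /dev/md7 --size=max   # use the full underlying partition
    resize2fs /dev/md7                 # online ext3 resize
}
```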
That's my incomplete 2am reply ;-)