[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [dm-devel] Another experimental dm target... an encryptiontarget



On Sat, 2003-07-26 at 20:59, jon+lvm silicide dk wrote:

> i sort of consider LVM part of the kernel too, because without each
> other, neither is anything.

Hmm? Works fine with only dmsetup.

> i have other kernel coding ideas if you're interested, some are more
> hardcore than others. They are not LVM related though, except that
> one of them might be a dm-target, or a filesystem. That one is because
> i'm a little annoyed that it is cumbersome to add more disks to a
> system, and LVM is not the answer i'm looking for. What i want is:
> i have a number of disks, and i want to distribute data across those,
> but the data must be replicated so i can handle a loss of one or more
> disks (user specified). Furthermore, i want to be able to add new
> disks to this block device, and remove them. The filesystem grows
> and shrinks. It is sort of a raid5 device that you can add new disks
> into like you can with LVM, and then the system distributes the data
> across the disks. This might take some time to rearrange the data,
> but the point is that the sysadmin shouldn't worry about how to migrate
> data, he/she just adds disks.

Wow. What you are suggesting is a raid5 target where you can
transparently add stripes and data gets reshuffled on the fly. That's
really pretty hardcore. Does something like this exist at all? This
reordering would mean a lot of seeking and can be pretty dangerous when
the process gets interrupted. And I don't have a clue how a raid5-like
system could be implemented with more than one redundant disk. A simple
xor operation doesn't work, and if other algorithms exist, I suppose
they would be very slow.
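To make the xor point concrete, here is a toy sketch (plain Python, not
dm-target code; chunk contents and the `xor_parity` helper are made up)
showing why a single XOR parity chunk recovers exactly one lost chunk
and no more:

```python
# Toy sketch of RAID5-style XOR parity: one redundant chunk per stripe.
# Not device-mapper code; chunks are just small byte strings.
from functools import reduce

def xor_parity(chunks):
    """Bytewise XOR across equal-sized chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_parity(data)

# Lose one data chunk: XOR of the survivors plus parity restores it,
# because every byte appears an even number of times and cancels out.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]

# Lose two chunks and the single parity equation has two unknowns:
# XOR alone can no longer tell the missing contents apart. Surviving
# more than one disk failure needs a different erasure code (e.g.
# Reed-Solomon), which is indeed more expensive to compute.
```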

Anyway, the next thing I'll try to do is implementing something like a
raid5 target. The first step would be a target that works similarly to
the striped target but writes additional parity information. I think
that's already fairly complicated because you have to somehow buffer
the chunks so that you can calculate the parity information. The next
step would then be handling a faulty stripe and using that parity
information to reconstruct the missing data when someone tries to read
a chunk from the failed stripe.
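The address arithmetic such a target would need can be sketched like
this (hypothetical helper, my names, assuming a left-asymmetric layout
where the parity chunk rotates across disks per stripe; real layouts
vary):

```python
# Hypothetical sketch of RAID5 address mapping with rotating parity
# (left-asymmetric layout). Not actual device-mapper code.

def map_chunk(logical_chunk, ndisks):
    """Map a logical chunk number to (stripe, data_disk, parity_disk).

    Each stripe holds ndisks - 1 data chunks plus one parity chunk;
    the parity disk rotates so parity writes don't hammer one disk.
    """
    data_per_stripe = ndisks - 1
    stripe = logical_chunk // data_per_stripe
    # Parity starts on the last disk and moves left each stripe.
    parity_disk = (ndisks - 1) - (stripe % ndisks)
    slot = logical_chunk % data_per_stripe
    # Data chunks fill the remaining disks, skipping the parity slot.
    data_disk = slot if slot < parity_disk else slot + 1
    return stripe, data_disk, parity_disk
```

With 4 disks, logical chunks 0-2 land on disks 0-2 of stripe 0 with
parity on disk 3, while stripe 1 puts parity on disk 2 and its last
data chunk on disk 3.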

The last thing would be the reconstruction process. And if you have that
you can also think of hot-spare disks or something.
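The reconstruction process itself is conceptually just the degraded
read applied stripe by stripe; a toy sketch (again hypothetical, with
disks modeled as lists of per-stripe chunks rather than block devices):

```python
# Hypothetical sketch of rebuilding a failed disk onto a hot spare:
# for every stripe, XOR the surviving chunks (data and parity alike)
# to regenerate the missing one. Toy model, not dm code.
from functools import reduce

def xor_chunks(chunks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def rebuild(disks, failed, spare):
    """Fill `spare` with the failed disk's chunks, stripe by stripe."""
    for stripe in range(len(disks[0])):
        survivors = [d[stripe] for i, d in enumerate(disks) if i != failed]
        spare.append(xor_chunks(survivors))
    return spare
```

A real implementation would of course have to interleave this with
normal I/O and cope with being interrupted partway through, which is
where the dangerous part lies.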

One step after the other. In order to deal with a failed disk I would
perhaps investigate a simpler multipath target first.

That's all pretty hardcore. But still worth experimenting with.

> just because it is bigger doesn't mean it isn't easier, and besides, Joe could
> be mistaken ;-)

Either he or I. ;)

> i can understand that you were happy.

Thanks. :)

But you'll see. After understanding things in the kernel it suddenly
feels terribly simple. I think the kernel is pretty well designed; the
code is somewhat self-documenting. Ok, I've got a lot of coding
experience for a student, but I haven't had much to do with the kernel
except for following the LKML for some years now.

The only thing I had to do with the kernel before (except for fixing
rejects when trying to apply patches or adding PCI numbers or trivial
things like that) was writing a device driver for a small MP3 player
that can be plugged into the parallel port.  Things like handling reads,
writes and ioctls and writing an interrupt handler, accessing some ports
and doing some waitqueue handling. It's fascinating to see how these
things are implemented on the kernel side when you've only known the
client side before.

--
Christophe Saout <christophe saout de>
Please avoid sending me Word or PowerPoint attachments.
See http://www.fsf.org/philosophy/no-word-attachments.html



