[dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

Stefan Bader Stefan.Bader at de.ibm.com
Wed May 30 09:12:37 UTC 2007


> > in-flight I/O to go to zero?
>
> Something like that is needed for some dm targets to support barriers.
> (We needn't always wait for *all* in-flight I/O.)
> When faced with -EOPNOTSUPP, do all callers fall back to a sync in
> the places a barrier would have been used, or are there any more
> sophisticated strategies attempting to optimise code without barriers?
>
If I haven't misunderstood, the idea is that no caller will face an
-EOPNOTSUPP in the future. IOW, every layer or driver somehow makes sure
the right thing happens.
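
To the fallback question above: as far as I know, the pattern callers
use today when a barrier write comes back with -EOPNOTSUPP is roughly
the following. This is only a sketch against the current buffer_head
helpers, with the surrounding context made up (bh is assumed to be the
block that needs the barrier, already set up and dirty) - not the
actual jbd code:

	int ret;

	set_buffer_ordered(bh);			/* request barrier semantics */
	ret = sync_dirty_buffer(bh);		/* write it out and wait */
	clear_buffer_ordered(bh);

	if (ret == -EOPNOTSUPP) {
		/*
		 * No barrier support anywhere in the stack: retry as a
		 * plain write and use an explicit cache flush instead.
		 */
		set_buffer_dirty(bh);
		ret = sync_dirty_buffer(bh);
		if (!ret)
			ret = blkdev_issue_flush(bh->b_bdev, NULL);
	}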

>
> An efficient I/O barrier implementation would not normally involve
> flushing AFAIK: dm surely wouldn't "cause" a higher layer to assume
> stronger semantics than are provided.
>
It seems there are at least two assumptions about what the semantics
exactly _are_. Based on Documentation/block/barriers.txt, I understand
a barrier implies both ordering and flushing.
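
At the bio level that boils down to a single flag; a rough sketch
against the 2.6-era interface (allocation and page setup abridged):

	struct bio *bio = bio_alloc(GFP_NOIO, 1);

	/* ... set bi_bdev/bi_sector and add the data page ... */

	/*
	 * BIO_RW_BARRIER asks for both properties from barriers.txt:
	 * everything queued before this bio is on stable storage before
	 * it is issued, nothing queued after it may be reordered in
	 * front of it, and the bio itself is on non-volatile media when
	 * it completes.
	 */
	bio->bi_rw |= (1 << BIO_RW_BARRIER);
	submit_bio(WRITE, bio);
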
But regardless of that, assume the following (admittedly constructed) case:

You've got a linear target that consists of two disks. One drive (a)
supports barriers and the other one (b) doesn't. Device-mapper just
maps the requests to the appropriate disk. Now the following sequence
happens:

1. block x gets mapped to drive b
2. block y (with barrier) gets mapped to drive a

Since drive a supports barrier requests, we don't get -EOPNOTSUPP, but
the request with block y might get written before block x since the
disks are independent. I guess the chances of this are quite low, since
at some point a barrier request will also hit drive b, but for the time
being it might be better to indicate -EOPNOTSUPP right from
device-mapper.
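
What I mean is roughly the following in the bio submission path. This
is only a sketch against the current bio/dm interfaces, not a tested
patch, and the per-table check of whether all targets can handle
barriers is left out:

	static int dm_request(struct request_queue *q, struct bio *bio)
	{
		/*
		 * Until every target in the table is known to preserve
		 * barrier semantics, refuse barrier bios here so that
		 * callers fall back to their own flush/wait strategy
		 * instead of silently losing the ordering guarantee.
		 */
		if (unlikely(bio_barrier(bio))) {
			bio_endio(bio, bio->bi_size, -EOPNOTSUPP);
			return 0;
		}

		/* ... normal splitting and mapping of the bio ... */
		return 0;
	}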

Stefan



