
Re: [Cluster-devel] gfs uevent and sysfs changes



On Thu, Dec 4, 2008 at 4:07 PM, David Teigland <teigland redhat com> wrote:
> On Thu, Dec 04, 2008 at 01:32:31PM -0500, david m. richter wrote:
>> On Mon, Dec 1, 2008 at 12:31 PM, David Teigland <teigland redhat com> wrote:
>> > Here are the compatibility aspects to the recent ideas about changes to
>> > the user/kernel interface between gfs (1 & 2) and gfs_controld.
>> >
>> > . gfs_controld can remove id from hostdata string in mount options
>>
>> hi david,
>>
>> I know I'm a peripheral consumer of the cluster suite, but I thought
>> I'd chime in and say that I am currently using the "id" as passed into
>> the kernel in the hostdata string (I believe by mount.gfs2?) in my
>> pNFS work.  does the above "gfs_controld can remove id from hostdata
>> string" comment refer to something orthogonal, or would it affect what
>> gets stored in the superblock's hostdata at mount time?
>
> yes
>
>> ..hm, sorry, I don't have the code right in front of me, but is that
>> "id" in the hostdata string the same thing as the mountgroup id?  if
>> so, then my above worry about the hostdata string is moot, because if
>> gfs_controld still has that info I can just make a downcall.
>
> Yes, it's created in gfs_controld, and passed to mount.gfs via the
> hostdata string which is then passed into the kernel during mount(2).

ah, so just to make sure i'm with you here: (1) gfs_controld is
generating this "id" (which is the mountgroup id), and (2) gfs-kernel
will no longer receive it in the hostdata string, so (3) i can just
rip out my in-kernel hostdata-parsing gunk and instead send in the
mountgroup id on my own (i have my own up/downcall channel)?  if i've
got that right, then everything's a cinch and i'll shut up :)
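for context, the gunk in question is basically the following (a
userspace-flavored sketch, and i'm going from memory on the exact
"jid=...:id=...:first=..." field layout, so treat the format and the
sample value as assumptions, not the actual lock_dlm code):

#define _GNU_SOURCE		/* strsep() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* pull the "id=" field out of a colon-separated hostdata string,
 * e.g. "jid=0:id=3276804:first=1" (field layout assumed from memory) */
static unsigned int parse_mountgroup_id(const char *hostdata)
{
	char *dup = strdup(hostdata);
	char *p = dup, *tok;
	unsigned int id = 0;

	if (!dup)
		return 0;
	while ((tok = strsep(&p, ":")) != NULL) {
		if (strncmp(tok, "id=", 3) == 0) {
			id = (unsigned int)strtoul(tok + 3, NULL, 0);
			break;
		}
	}
	free(dup);
	return id;
}

int main(void)
{
	printf("id = %u\n", parse_mountgroup_id("jid=0:id=3276804:first=1"));
	return 0;
}

if the id stops showing up in hostdata, all of that parsing goes away
and i just push the mountgroup id down my own channel instead.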

say, one tangential question (i won't be offended if you skip it -
heh): is there a particular reason you folks went with the uevent
mechanism for doing upcalls?  i'm just curious, given the apparent
complexity and possible overhead of the whole layered netlink
apparatus vs. something like Trond Myklebust's rpc_pipefs (don't let
the "rpc" fool you; it's a barebones, dead-simple pipe).  and no, i'm
not selling anything :)  my boss was asking for a list of differences
between rpc_pipefs and uevents, and the best i could come up with was
that the former is bidirectional.  Trond mentioned the netlink
overhead and i wondered whether it's actually a significant factor or
just lost in the noise in most cases.
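for concreteness, this is the sort of thing i mean by the netlink
apparatus on the userspace end -- a minimal uevent listener, the same
basic socket setup gfs_controld or udev would use (nothing here is
gfs-specific, it's just the generic NETLINK_KOBJECT_UEVENT plumbing):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int main(void)
{
	struct sockaddr_nl snl = {
		.nl_family = AF_NETLINK,
		.nl_pid    = getpid(),
		.nl_groups = 1,		/* kernel uevent multicast group */
	};
	char buf[4096];
	int s = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);

	if (s < 0 || bind(s, (struct sockaddr *)&snl, sizeof(snl)) < 0) {
		perror("uevent socket");
		return 1;
	}

	for (;;) {
		ssize_t len = recv(s, buf, sizeof(buf) - 1, 0);
		if (len <= 0)
			break;
		buf[len] = '\0';
		/* payload is "ACTION@devpath" followed by NUL-separated
		 * KEY=value strings */
		for (char *p = buf; p < buf + len; p += strlen(p) + 1)
			printf("%s\n", p);
	}
	close(s);
	return 0;
}

rpc_pipefs, by contrast, is just open/read/write on a pipe-like file,
which is what prompted the question.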

thanks again,

  d

> Previously, gfs-kernel (lock_dlm actually) would pass this id back up to
> gfs_controld within the plock op structures.  This was because plock ops
> for all gfs fs's were funnelled to gfs_controld through a single misc
> device.  gfs_controld would match the op to a particular fs using the id.
>
> The dlm does this now, using the lockspace id.
>
> Dave
>
>
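p.s. the single-device dispatch you describe makes sense to me; for my
own notes i sketched the pattern roughly as below.  the struct, the
device path, and the mount table are all made up for illustration --
the real interface is struct dlm_plock_info from <linux/dlm_plock.h>,
and the device node is wherever udev puts it on a given box:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* made-up op record, for illustration only */
struct toy_plock_op {
	uint32_t fsid;		/* id of the fs/lockspace the op is for */
	uint64_t start, end;
	int32_t  rv;		/* result written back to the kernel */
};

/* the daemon's table of mounted filesystems, keyed by id */
struct toy_mount {
	uint32_t id;
	const char *name;
};

static struct toy_mount mounts[] = {
	{ 0x0011aabb, "mycluster:gfs1" },
	{ 0x0022ccdd, "mycluster:gfs2" },
};

static struct toy_mount *find_fs_by_id(uint32_t fsid)
{
	for (size_t i = 0; i < sizeof(mounts) / sizeof(mounts[0]); i++)
		if (mounts[i].id == fsid)
			return &mounts[i];
	return NULL;
}

int main(void)
{
	/* path is a guess; check what udev creates on your system */
	int fd = open("/dev/misc/dlm_plock", O_RDWR);
	struct toy_plock_op op;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* one device, many filesystems: every op carries the id so the
	 * daemon can route it to the right mount */
	while (read(fd, &op, sizeof(op)) == sizeof(op)) {
		struct toy_mount *m = find_fs_by_id(op.fsid);
		op.rv = m ? 0 : -ENOENT;
		if (write(fd, &op, sizeof(op)) != sizeof(op))
			break;
	}
	close(fd);
	return 0;
}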

