How does nss_ldap traffic behave when the lock times out (default 30 seconds, in /proc/cluster/config/dlm/lock-timeout) and authentication happens for a user/group local to one of the nodes whose KVM gets locked?
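In case it helps to check what the timeout is currently set to, a quick read-only sketch (assuming the /proc/cluster interface referenced above; the path can differ by release):

```shell
# Print the DLM lock timeout (in seconds) if this host exposes the
# /proc/cluster interface mentioned above; otherwise say so.
# This only reads the value; it does not change cluster behaviour.
if [ -r /proc/cluster/config/dlm/lock-timeout ]; then
    cat /proc/cluster/config/dlm/lock-timeout
else
    echo "dlm proc interface not present on this host"
fi
```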
On Fri, Mar 5, 2010 at 12:42 PM, Rudi Ahlers <Rudi softdux com>
On 2010/03/05 11:59 AM, Brett Cave wrote:
On Thu, Mar 4, 2010 at 4:23 AM, Jeff Karpinski <jeff 3d0g net>
Our assigned Red Hat engineer was on-site today and pointed out the
blindingly obvious solution. Can't believe I didn't think of it: Run
NFS as a clustered service and have the VMs mount that. That way ANY
system - even outside of the cluster - can also access the data.
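For anyone wanting to try the same thing, a clustered NFS service on top of GFS under rgmanager looks roughly like the fragment below. Every name, IP, device, and path here is invented for illustration; check the Red Hat Cluster Suite docs for the exact resource agents on your release.

```xml
<!-- Sketch of an rgmanager service exporting a GFS mount over NFS,
     behind a floating service IP. All values are placeholders. -->
<rm>
  <resources>
    <clusterfs name="gfsdata" fstype="gfs" device="/dev/vg0/gfslv"
               mountpoint="/mnt/gfs" force_unmount="0"/>
    <ip address="192.168.1.100" monitor_link="1"/>
    <nfsexport name="gfsexport"/>
    <nfsclient name="vmclients" target="192.168.1.0/24" options="rw,sync"/>
  </resources>
  <service autostart="1" name="nfs-svc">
    <clusterfs ref="gfsdata">
      <nfsexport ref="gfsexport">
        <nfsclient ref="vmclients"/>
      </nfsexport>
    </clusterfs>
    <ip ref="192.168.1.100"/>
  </service>
</rm>
```

Clients mount via the service IP, so the export follows the service if it fails over to another node.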
This is what we are doing, works great. We considered presenting the
raw devices from our SAN (FC connectivity instead of iSCSI) to the
VMs, but opted against it due to the changing number of VMs, GFS's
requirements for journals / number of nodes, and multicast issues
(each dom0 uses a different routed network for its VMs). Each VM mounts
NFS from its host.
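On the guest side, "each VM mounts NFS from its host" can be as simple as one fstab entry. The hostname and paths below are made up, not from this thread:

```shell
# /etc/fstab on a VM, mounting the NFS export from its own dom0.
# "dom0-local", /mnt/gfs, and /srv/shared are placeholders.
dom0-local:/mnt/gfs  /srv/shared  nfs  rw,hard,intr  0  0
```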
What kind of security do you apply, both to the NFS cluster and to the
data that gets accessed on it?
heya Rudi, never realised you were on this list too ;)
The exports are controlled by source IP address in /etc/exports. The data on there is not sensitive at all in our environment, and GFS is a server-only environment with no user access... but I just tested using ACLs and it works 100% (added the acl option to the gfs mount and configured permissions with setfacl). We use LDAP network authentication, so it works nicely with group permissions ;)
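As a concrete sketch of the two pieces described above (the IP-restricted export and ACLs on the GFS mount), with every address, device, path, and group name invented for illustration:

```shell
# /etc/exports: restrict the export to specific client source IPs, e.g.
#   /mnt/gfs  192.168.1.0/24(rw,sync,no_subtree_check)

# Mount GFS with ACL support, then grant a group access via a POSIX ACL.
mount -o acl -t gfs /dev/vg0/gfslv /mnt/gfs
setfacl -m g:webteam:rwx /mnt/gfs/projects
getfacl /mnt/gfs/projects   # verify the ACL entry was added
```

With LDAP-backed groups, the group named in setfacl just has to exist in the directory; the ACL itself is stored on the filesystem.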
(Although we do have one LUKS volume image on the GFS filesystem that is mounted by one of the physical machines using a locally stored keyfile.)
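For reference, a keyfile-based open of a LUKS image file like that usually looks something like the following; the loop device, image path, keyfile path, and mapping name are all placeholders:

```shell
# Attach the image file stored on GFS as a block device,
# open it with a locally kept keyfile, and mount the mapping.
losetup /dev/loop0 /mnt/gfs/secure.img
cryptsetup luksOpen --key-file /etc/keys/secure.key /dev/loop0 secure
mount /dev/mapper/secure /mnt/secure
```

Keeping the keyfile on local disk (not on GFS) is what makes the volume readable only from that one physical machine.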
Linux-cluster mailing list
Linux-cluster redhat com