[libvirt] [libvirt-users] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path

TomK tk at mdevsys.com
Thu Apr 14 05:24:04 UTC 2016


On 4/14/2016 1:01 AM, TomK wrote:
> On 4/13/2016 10:00 AM, John Ferlan wrote:
>>
>> On 04/13/2016 09:23 AM, TomK wrote:
>>> On 4/13/2016 1:33 AM, Martin Kletzander wrote:
>>>> On Tue, Apr 12, 2016 at 06:24:16PM -0400, TomK wrote:
>>>>> On 4/12/2016 5:08 PM, John Ferlan wrote:
>>>>>> Having/using a root squash via an NFS pool is "easy" (famous last
>>>>>> words)
>>>>>>
>>>>>> Create some pool XML (taking the example I have)
>>>>>>
>>>>>> % cat nfs.xml
>>>>>> <pool type='netfs'>
>>>>>>       <name>rootsquash</name>
>>>>>>       <source>
>>>>>>           <host name='localhost'/>
>>>>>>           <dir path='/home/bzs/rootsquash/nfs'/>
>>>>>>           <format type='nfs'/>
>>>>>>       </source>
>>>>>>       <target>
>>>>>> <path>/tmp/netfs-rootsquash-pool</path>
>>>>>>           <permissions>
>>>>>>               <mode>0755</mode>
>>>>>>               <owner>107</owner>
>>>>>>               <group>107</group>
>>>>>>           </permissions>
>>>>>>       </target>
>>>>>> </pool>
>>>>>>
>>>>>> In this case 107:107 is qemu:qemu, and I used 'localhost' as the
>>>>>> hostname, but that can be an FQDN or IP address of the NFS server.
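>>>>>>
>>>>>> (A quick sanity check, in case qemu runs under a different uid/gid
>>>>>> on your host; getent looks the numeric id up in the passwd database:
>>>>>>
>>>>>> % getent passwd 107
>>>>>> % id qemu
>>>>>>
>>>>>> Adjust <owner>/<group> in the pool XML if yours differ.)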
>>>>>>
>>>>>> You've already seen my /etc/exports
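>>>>>>
>>>>>> For anyone who hasn't seen it: a root-squashing entry for this share
>>>>>> would look something like the following in /etc/exports (options are
>>>>>> illustrative; root_squash is the default for Linux NFS exports):
>>>>>>
>>>>>> /home/bzs/rootsquash/nfs *(rw,sync,root_squash)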
>>>>>>
>>>>>> virsh pool-define nfs.xml
>>>>>> virsh pool-build rootsquash
>>>>>> virsh pool-start rootsquash
>>>>>> virsh vol-list rootsquash
>>>>>>
>>>>>> Now instead of
>>>>>>
>>>>>>      <disk type='file' device='disk'>
>>>>>>        <source file='/var/lib/one//datastores/0/38/disk.0'/>
>>>>>>        <target dev='hda'/>
>>>>>>        <driver name='qemu' type='qcow2' cache='none'/>
>>>>>>      </disk>
>>>>>>
>>>>>> Something like:
>>>>>>
>>>>>>     <disk type='volume' device='disk'>
>>>>>>       <driver name='qemu' type='qcow2' cache='none'/>
>>>>>>       <source pool='rootsquash' volume='disk.0'/>
>>>>>>       <target dev='hda'/>
>>>>>>     </disk>
>>>>>>
>>>>>> The volume name may be off, but it's perhaps close.  I forget how 
>>>>>> to do
>>>>>> the readonly bit for a pool (again, my focus is elsewhere).
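>>>>>>
>>>>>> Untested sketch: for the CDROM disk, the <readonly/> element in the
>>>>>> domain XML is probably the piece that's wanted.  Pool/volume names
>>>>>> here just follow the examples above, and type='raw' is a guess:
>>>>>>
>>>>>>     <disk type='volume' device='cdrom'>
>>>>>>       <driver name='qemu' type='raw'/>
>>>>>>       <source pool='rootsquash' volume='disk.1'/>
>>>>>>       <target dev='hdc'/>
>>>>>>       <readonly/>
>>>>>>     </disk>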
>>>>>>
>>>>>> Of course you'd have to adjust the nfs.xml above to suit your
>>>>>> environment and see what you see/get.  The privileges for the 
>>>>>> pool and
>>>>>> volumes in the pool become the key to how libvirt decides to 
>>>>>> "request
>>>>>> access" to the volume.  "disk.1" having read access is probably 
>>>>>> not an
>>>>>> issue since you seem to be using it as a CDROM; however, "disk.0" is
>>>>>> going to be used for read/write - thus would have to be 
>>>>>> appropriately
>>>>>> configured...
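>>>>>>
>>>>>> One way to see what libvirt thinks the volume ownership/permissions
>>>>>> are (volume and pool names as above):
>>>>>>
>>>>>> % virsh vol-dumpxml disk.0 --pool rootsquash
>>>>>>
>>>>>> and check the <permissions> block in the output.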
>>>>>>
>>>>> Thanks John!  Appreciated again.
>>>>>
>>>>> No worries; handle what's on your plate now and earmark this for
>>>>> checking once you have some free cycles.  I can temporarily hop along
>>>>> on one leg using Martin Kletzander's workaround (it's a POC at the
>>>>> moment).
>>>>>
>>>>> I'll have a further look at your instructions, but wanted to confirm:
>>>>> that nfs.xml config is a one-time thing, correct?  I'm spinning these
>>>>> VMs up at will via the OpenNebula GUI, and if I have to update the
>>>>> config for each VM, that breaks the cloud provisioning.  I'll go over
>>>>> your notes again.  I'm optimistic.   :)
>>>>>
>>>> The more I think about it, the more I am convinced that the
>>>> workaround is actually not a workaround.  The only thing you need to
>>>> do is grant execute permission to others (specifically to 'nobody' on
>>>> the NFS share) on every directory in the path.  Without that, even
>>>> the pool won't be usable from libvirt.  It does not pose any security
>>>> issue, though, as it only allows others to traverse the path.  When
>>>> qemu is launched, it has the proper "label", meaning the uid:gid
>>>> needed to access the file, so it will be able to read/write according
>>>> to whatever permissions you set there.  It's just that libvirt does
>>>> some checks, for example that the path exists.
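>>>>
>>>> A quick way to verify that, for example, is namei, which walks each
>>>> component of a path and prints its permissions:
>>>>
>>>> namei -l /var/lib/one/datastores/0/47/disk.1
>>>>
>>>> Every directory listed needs at least --x for "other".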
>>>>
>>>> Hope that's understandable and it will resolve your issue permanently.
>>>>
>>>> Have a nice day,
>>>> Martin
>>>>
>>>>
>>> The only reason I said this might be a 'workaround' is that John
>>> Ferlan commented he'd look at this later on.  Ideally the OpenNebula
>>> community keeps the 'other' permissions at nil, and presumably that
>>> works on NFSv3 per the forum topic from them I included earlier.  But
>>> if setting the 'other' permissions is what enables the functionality,
>>> I would be comfortable with that.
>>>
>> Martin and I were taking different paths...  But yes, it certainly makes
>> sense given your error message about the canonical path and the need for
>> eXecute permissions...  I think I started wondering about that first, but
>> then jumped into the NFS pool because that's my reference point for
>> root-squash.  Since root squash essentially sends root's requests as
>> "nfsnobody" (IOW: "other", not the user or group), the "o+x" approach
>> is the solution if you're going directly at the file.
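>>
>> A quick illustration (squash-test is just a throwaway name): create a
>> file as root on the NFS client and look at the ownership.  With
>> root_squash the request arrives as nfsnobody, so depending on the
>> directory permissions it either fails with "Permission denied" or the
>> file shows up owned by nfsnobody:
>>
>> touch /var/lib/one/squash-test && ls -ln /var/lib/one/squash-test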
>>
>> John
>
> Yes, it appears o+x is the only way right now.  It definitely tries
> to access the share as root on CentOS 7, though: I also tried adding
> nfsnobody and nobody to the oneadmin group, and that did not work
> either.  It seems OpenNebula doesn't have this issue with NFSv3 running
> on Ubuntu:
>
> [root@mdskvm-p01 ~]# rmdir /tmp/netfs-rootsquash-pool
> [root@mdskvm-p01 ~]# cat nfs.xml
> <pool type='netfs'>
>      <name>rootsquash</name>
>      <source>
>          <host name='opennebula01'/>
>          <dir path='/var/lib/one'/>
>          <format type='nfs'/>
>      </source>
>      <target>
>          <path>/tmp/netfs-rootsquash-pool</path>
>          <permissions>
>              <mode>0755</mode>
>              <owner>9869</owner>
>              <group>9869</group>
>          </permissions>
>      </target>
> </pool>
> [root@mdskvm-p01 ~]#
>
> [root@mdskvm-p01 ~]# virsh pool-define nfs.xml
> Pool rootsquash defined from nfs.xml
>
> [root@mdskvm-p01 ~]# virsh pool-build rootsquash
> Pool rootsquash built
>
> [root@mdskvm-p01 ~]# virsh pool-start rootsquash
> error: Failed to start pool rootsquash
> error: cannot open path '/tmp/netfs-rootsquash-pool': Permission denied
>
> [root@mdskvm-p01 ~]# virsh vol-list rootsquash
> error: Failed to list volumes
> error: Requested operation is not valid: storage pool 'rootsquash' is not active
>
> [root@mdskvm-p01 ~]# ls -altri /tmp/netfs-rootsquash-pool
> total 4
>      133 drwxrwxrwt. 14 root     root     4096 Apr 14 00:05 ..
> 68785924 drwxr-xr-x   2 oneadmin oneadmin    6 Apr 14 00:05 .
> [root@mdskvm-p01 ~]#
>
> [root@mdskvm-p01 ~]# id oneadmin
> uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),992(libvirt),36(kvm)
> [root@mdskvm-p01 ~]# id nobody
> uid=99(nobody) gid=99(nobody) groups=99(nobody),9869(oneadmin)
> [root@mdskvm-p01 ~]# id nfsnobody
> uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody),9869(oneadmin)
> [root@mdskvm-p01 ~]# id root
> uid=0(root) gid=0(root) groups=0(root)
> [root@mdskvm-p01 ~]#
>
> [root@mdskvm-p01 ~]# ps -ef|grep -i libvirtd
> root       352 31058  0 00:31 pts/1    00:00:00 grep --color=auto -i libvirtd
> root      1459     1  0 Apr11 ?        00:07:40 /usr/sbin/libvirtd --listen --config /etc/libvirt/libvirtd.conf
> [root@mdskvm-p01 ~]#
>
>
>
> [root@mdskvm-p01 ~]# umount /var/lib/one
> [root@mdskvm-p01 ~]# mount --no-canonicalize /var/lib/one
> [root@mdskvm-p01 ~]# umount /var/lib/one
> [root@mdskvm-p01 ~]# mount /var/lib/one
> [root@mdskvm-p01 ~]# mount|tail -n 1
> 192.168.0.70:/var/lib/one on /var/lib/one type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.60,local_lock=none,addr=192.168.0.70)
> [root@mdskvm-p01 ~]# umount /var/lib/one
> [root@mdskvm-p01 ~]# mount --no-canonicalize /var/lib/one
> [root@mdskvm-p01 ~]# mount|tail -n 1
> 192.168.0.70:/var/lib/one on /var/lib/one type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.60,local_lock=none,addr=192.168.0.70)
> [root@mdskvm-p01 ~]# su - oneadmin
> Last login: Thu Apr 14 00:27:59 EDT 2016 on pts/0
> [oneadmin@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create /var/lib/one//datastores/0/47/deployment.0
> create: file(optdata): /var/lib/one//datastores/0/47/deployment.0
> error: Failed to create domain from /var/lib/one//datastores/0/47/deployment.0
> error: can't canonicalize path '/var/lib/one//datastores/0/47/disk.1': Permission denied
> [oneadmin@mdskvm-p01 ~]$
>
>
>
>
> CONTROLLER ( NFS Server )
>
> [oneadmin@opennebula01 one]$ ls -ld /var{,/lib{,/one{,/datastores{,/0{,/47{,/disk.1}}}}}}
> drwxr-xr-x. 19 root     root       4096 Apr  4 21:26 /var
> drwxr-xr-x. 28 root     root       4096 Apr 13 03:30 /var/lib
> drwxr-x---. 12 oneadmin oneadmin   4096 Apr 14 00:40 /var/lib/one
> drwxrwxr-x   6 oneadmin oneadmin     46 Mar 31 02:44 /var/lib/one/datastores
> drwxrwxr-x   8 oneadmin oneadmin     60 Apr 13 23:31 /var/lib/one/datastores/0
> drwxrwxr-x   2 oneadmin oneadmin     68 Apr 13 23:32 /var/lib/one/datastores/0/47
> -rw-r--r--   1 oneadmin oneadmin 372736 Apr 13 23:32 /var/lib/one/datastores/0/47/disk.1
> [oneadmin@opennebula01 one]$
>
>
>
> NODE ( NFS Client )
>
> [oneadmin@mdskvm-p01 ~]$ ls -ld /var{,/lib{,/one{,/datastores{,/0{,/47{,/disk.1}}}}}}
> drwxr-xr-x. 21 root     root       4096 Apr 11 07:10 /var
> drwxr-xr-x. 45 root     root       4096 Apr 13 04:11 /var/lib
> drwxr-x---  12 oneadmin oneadmin   4096 Apr 14 00:39 /var/lib/one
> drwxrwxr-x   6 oneadmin oneadmin     46 Mar 31 02:44 /var/lib/one/datastores
> drwxrwxr-x   8 oneadmin oneadmin     60 Apr 13 23:31 /var/lib/one/datastores/0
> drwxrwxr-x   2 oneadmin oneadmin     68 Apr 13 23:32 /var/lib/one/datastores/0/47
> -rw-r--r--   1 oneadmin oneadmin 372736 Apr 13 23:32 /var/lib/one/datastores/0/47/disk.1
> [oneadmin@mdskvm-p01 ~]$
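>
> Note that /var/lib/one itself is drwxr-x--- on both ends, i.e. no
> execute bit for "other", which matches Martin's diagnosis.  One way to
> probe the path as the squashed identity (nfsnobody normally has no
> login shell, hence the -s):
>
> su -s /bin/sh -c 'stat /var/lib/one/datastores/0/47/disk.1' nfsnobody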
>
>
>
> Cheers,
> Tom K.
> ------------------------------------------------------------------------------------- 
>
> Living on earth is expensive, but it includes a free trip around the sun.
>

Additionally, OpenNebula runs this as oneadmin:

Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 + echo 'Running as user oneadmin'
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 ++ virsh --connect qemu:///system create /var/lib/one//datastores/0/47/deployment.0
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 error: Failed to create domain from /var/lib/one//datastores/0/47/deployment.0
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 error: can't canonicalize path '/var/lib/one//datastores/0/47/disk.1': Permission denied
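
For the record, the o+x fix amounts to adding execute-for-others on each
directory libvirt has to traverse (the file itself doesn't need it):

chmod o+x /var/lib/one
chmod o+x /var/lib/one/datastores
chmod o+x /var/lib/one/datastores/0
chmod o+x /var/lib/one/datastores/0/47

Per the listings above, only /var/lib/one is actually missing o+x; the
rest already have it, so those chmods are no-ops.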

Cheers,
TK



