Date: Mon, 1 Jul 2024 09:47:44 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Yi Liu <yi.l.liu@...el.com>
CC: Jason Gunthorpe <jgg@...dia.com>, "Tian, Kevin" <kevin.tian@...el.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "alex.williamson@...hat.com"
	<alex.williamson@...hat.com>, "peterx@...hat.com" <peterx@...hat.com>,
	"ajones@...tanamicro.com" <ajones@...tanamicro.com>
Subject: Re: [PATCH] vfio: Reuse file f_inode as vfio device inode

On Sun, Jun 30, 2024 at 03:06:05PM +0800, Yi Liu wrote:
> On 2024/6/28 23:28, Yan Zhao wrote:
> > On Fri, Jun 28, 2024 at 05:48:11PM +0800, Yi Liu wrote:
> > > On 2024/6/28 13:21, Yan Zhao wrote:
> > > > On Thu, Jun 27, 2024 at 09:42:09AM -0300, Jason Gunthorpe wrote:
> > > > > On Thu, Jun 27, 2024 at 05:51:01PM +0800, Yan Zhao wrote:
> > > > > 
> > > > > > > > > This doesn't seem right.. There is only one device but multiple files
> > > > > > > > > can be opened on that device.
> > > > > > Maybe we can move this assignment into vfio_df_ioctl_bind_iommufd(),
> > > > > > after vfio_df_open() makes sure device->open_count is 1.
> > > > > 
> > > > > Yeah, that seems better.
> > > > > 
> > > > > Logically it would be best if all places set the inode once the
> > > > > inode/FD has been made the one and only way to access it.
> > > > For the group path, I'm afraid there is no such place in the kernel that
> > > > ensures only one active fd.
> > > > I tried modifying QEMU to allow two openings and two assignments of the
> > > > same device. It works, and it appears to the guest that there are 2
> > > > devices, though this ultimately leads to device malfunctions in the guest.
> > > > 
> > > > > > BTW, in the group path, what's the benefit of allowing multiple opens
> > > > > > of the device?
> > > > > 
> > > > > I don't know; the thing that opened the first FD can just dup it. No
> > > > > idea why two different FDs would be useful. It is something we removed
> > > > > in the cdev flow.
> > > > > 
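(Concretely, "just dup it" would look something like this on the userspace
side; a trivial sketch, not from any existing tree:)

#include <err.h>
#include <unistd.h>

/*
 * Duplicate an already-open VFIO device fd rather than opening the
 * device node a second time; both fds then share one struct file and
 * hence one inode.
 */
static int clone_device_fd(int device_fd)
{
	int fd2 = dup(device_fd);

	if (fd2 < 0)
		err(1, "dup");
	return fd2;
}
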
> > > > Thanks. However, from the code, it reads like a drawback of the cdev flow :)
> > > > I don't understand why the group path is secure though.
> > > > 
> > > >           /*
> > > >            * Only the group path allows the device to be opened multiple
> > > >            * times.  The device cdev path doesn't have a secure way for it.
> > > >            */
> > > >           if (device->open_count != 0 && !df->group)
> > > >                   return -EINVAL;
> > > > 
> > > > 
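Coming back to the suggestion above, the sketch below is roughly what I
meant (untested; df->filep and device->inode are placeholders for wherever
the file pointer and the cached inode actually live, not the current struct
layout):

	/* In vfio_df_ioctl_bind_iommufd(), after the device is opened: */
	ret = vfio_df_open(df);
	if (ret)
		goto out;	/* error unwinding elided */

	/*
	 * vfio_df_open() has rejected any second open on the cdev path,
	 * so device->open_count is guaranteed to be 1 here and exactly
	 * one file refers to the device, making the inode to record
	 * unambiguous.
	 */
	df->device->inode = df->filep->f_inode;	/* placeholder names */
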
> > > 
> > > The group path only allows a single open of the group, so the device FDs
> > > retrieved via the group stay within the opener of the group. This security
> > > is built on top of the single open of the group.
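
Right. For reference, my understanding of the group open path (paraphrased
from memory, not a verbatim quote of group.c) is that the exclusiveness
comes from an -EBUSY check on a second open of the group file:

	/* vfio_group_fops_open(), simplified: */
	mutex_lock(&group->group_lock);

	/* Only one file may hold the group open at a time. */
	if (group->opened_file) {
		ret = -EBUSY;
		goto out_unlock;
	}
	group->opened_file = filep;

So any device fd handed out via the group traces back to that single
opened_file.
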
> > What if the group is opened only once but the VFIO_GROUP_GET_DEVICE_FD
> > ioctl is called multiple times?
> 
> This should happen within the process context that has opened the group. It
> should be safe, and that would be tracked by the open_count.
Thanks for the explanation.

Even within a single process, for the group path, it appears that accesses to
the multiple opened device fds still require proper synchronization. With such
synchronization in place, accesses from different processes via the cdev path
could function correctly as well.
Additionally, the group fd can be passed to another process, allowing device
fds to be acquired and accessed from a different process.
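
For instance, a group fd opened in one process can be handed to another over
a UNIX domain socket with standard SCM_RIGHTS fd passing; a minimal sender
sketch (illustration only, not from the kernel tree):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass an open fd (e.g. a VFIO group fd) to a peer over a UNIX socket. */
static int send_fd(int sock, int fd)
{
	char dummy = 'x';
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} u = { { 0 } };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	/* The receiver can then call VFIO_GROUP_GET_DEVICE_FD on its copy. */
	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}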

On the other hand, the cdev path might also support multiple opened fds from
a single process by checking the task gid.

The device cdev path simply opts not to do that because it is unnecessary, right?

