Date: Thu, 27 Jun 2024 00:17:02 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Jason Gunthorpe <jgg@...dia.com>, "Zhao, Yan Y" <yan.y.zhao@...el.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
	"peterx@...hat.com" <peterx@...hat.com>, "ajones@...tanamicro.com"
	<ajones@...tanamicro.com>
Subject: RE: [PATCH] vfio: Reuse file f_inode as vfio device inode

> From: Tian, Kevin
> Sent: Thursday, June 27, 2024 7:56 AM
> 
> > From: Jason Gunthorpe <jgg@...dia.com>
> > Sent: Wednesday, June 26, 2024 9:35 PM
> >
> > On Mon, Jun 17, 2024 at 05:53:32PM +0800, Yan Zhao wrote:
> > > Reuse file f_inode as vfio device inode and associate pseudo path file
> > > directly to inode allocated in vfio fs.
> > >
> > > Currently, vfio device is opened via 2 ways:
> > > 1) via cdev open
> > >    vfio device is opened with a cdev device with file f_inode and address
> > >    space associated with a cdev inode;
> > > 2) via VFIO_GROUP_GET_DEVICE_FD ioctl
> > >    vfio device is opened via a pseudo path file with file f_inode and
> > >    address space associated with an inode in anon_inode_fs.
> > >
> > > In commit b7c5e64fecfa ("vfio: Create vfio_fs_type with inode per device"),
> > > an inode in vfio fs is allocated for each vfio device. However, this inode
> > > in vfio fs is only used to assign its address space to that of a file
> > > associated with another cdev inode or an inode in anon_inode_fs.
> > >
> > > This patch
> > > - reuses cdev device inode as the vfio device inode when it's opened via
> > >   cdev way;
> > > - allocates an inode in vfio fs, associate it to the pseudo path file,
> > >   and save it as the vfio device inode when the vfio device is opened via
> > >   VFIO_GROUP_GET_DEVICE_FD ioctl.
> > >
> > > File address space will then point automatically to the address space of
> > > the vfio device inode. Tools like unmap_mapping_range() can then zap all
> > > vmas associated with the vfio device.
> > >
> > > Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> > > ---
> > >  drivers/vfio/device_cdev.c |  9 ++++---
> > >  drivers/vfio/group.c       | 21 ++--------------
> > >  drivers/vfio/vfio.h        |  2 ++
> > >  drivers/vfio/vfio_main.c   | 49 +++++++++++++++++++++++++++-----------
> > >  4 files changed, 43 insertions(+), 38 deletions(-)
> > >
> > > diff --git a/drivers/vfio/device_cdev.c b/drivers/vfio/device_cdev.c
> > > index bb1817bd4ff3..a4eec8e88f5c 100644
> > > --- a/drivers/vfio/device_cdev.c
> > > +++ b/drivers/vfio/device_cdev.c
> > > @@ -40,12 +40,11 @@ int vfio_device_fops_cdev_open(struct inode *inode, struct file *filep)
> > >  	filep->private_data = df;
> > >
> > >  	/*
> > > -	 * Use the pseudo fs inode on the device to link all mmaps
> > > -	 * to the same address space, allowing us to unmap all vmas
> > > -	 * associated to this device using unmap_mapping_range().
> > > +	 * mmaps are linked to the address space of the inode of device cdev.
> > > +	 * Save the inode of device cdev in device->inode to allow
> > > +	 * unmap_mapping_range() to unmap all vmas.
> > >  	 */
> > > -	filep->f_mapping = device->inode->i_mapping;
> > > -
> > > +	device->inode = inode;
> >
> > This doesn't seem right.. There is only one device but multiple file
> > can be opened on that device.
> >
> > We expect every open file to have a unique inode otherwise the
> > unmap_mapping_range() will not function properly.
> >

Can you elaborate on the reason for this expectation? If multiple opens
come from the same process, then having them share a single address
space per device still makes sense. Are you considering a scenario
where a vfio device is opened by multiple processes? Is that allowed?
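
To make sure I'm reading the current code right: with one vfio-fs inode
per device (set up at vfio_init_device()), every open file's f_mapping
points at that single inode, so one zap covers the vmas created through
any of the open files. A minimal illustration (hypothetical helper, not
in the tree; only unmap_mapping_range() and device->inode are existing
names):

#include <linux/mm.h>	/* unmap_mapping_range() */
#include <linux/vfio.h>	/* struct vfio_device, with ->inode */

/*
 * Hypothetical helper, for illustration only: because every open file
 * shares device->inode->i_mapping, zapping that one address space
 * removes the vmas created through all of the open files at once.
 * holebegin = 0 with holelen = 0 covers the whole mapping.
 */
static void zap_all_device_mappings(struct vfio_device *device)
{
	unmap_mapping_range(device->inode->i_mapping, 0, 0, true);
}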

btw Yan's patch appears to impose different behaviors between the cdev
and group paths. For cdev, all open files share the address space of
the cdev inode, which has the same effect as sharing that of the vfio-fs
inode today. But for group open, every open file gets a new inode,
which kind of matches your expectation, except that the patch simply
overwrites vfio_device->inode instead of tracking a list. That sounds
incomplete.
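
If a unique inode per open file is really required, the device would
presumably have to remember all of them, something along these lines
(purely illustrative sketch; the structures, the lock and the list below
are all hypothetical and exist neither in the patch nor in the tree):

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>

struct vfio_open_inode {		/* hypothetical: one per open file */
	struct inode	*inode;		/* inode backing that open file */
	struct list_head node;		/* linked into the per-device list */
};

struct vfio_device_inodes {		/* hypothetical per-device state */
	struct mutex	 lock;
	struct list_head open_inodes;
};

/* Zap would then have to walk the list instead of one device->inode. */
static void vfio_device_zap_all(struct vfio_device_inodes *di)
{
	struct vfio_open_inode *cur;

	mutex_lock(&di->lock);
	list_for_each_entry(cur, &di->open_inodes, node)
		unmap_mapping_range(cur->inode->i_mapping, 0, 0, true);
	mutex_unlock(&di->lock);
}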

> 
> Does it mean that the existing code is already broken? there is only
> one vfio-fs inode per device (allocated at vfio_init_device()).
> 
> And if we expect unique inode per open file then there will be a list
> of inodes tracked under vfio_pci_core_device for unmap_mapping_range()
> but it's also not the case today:
> 
> static void vfio_pci_zap_bars(struct vfio_pci_core_device *vdev)
> {
> 	struct vfio_device *core_vdev = &vdev->vdev;
> 	loff_t start = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_BAR0_REGION_INDEX);
> 	loff_t end = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_ROM_REGION_INDEX);
> 	loff_t len = end - start;
> 
> 	unmap_mapping_range(core_vdev->inode->i_mapping, start, len, true);
> }
