Message-ID: <20191120102856.7e01e2e2@x1.home>
Date: Wed, 20 Nov 2019 10:28:56 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Jason Wang <jasowang@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Parav Pandit <parav@...lanox.com>,
Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
davem@...emloft.net, gregkh@...uxfoundation.org,
Dave Ertman <david.m.ertman@...el.com>, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, nhorman@...hat.com,
sassmann@...hat.com, Kiran Patil <kiran.patil@...el.com>,
Tiwei Bie <tiwei.bie@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus
On Wed, 20 Nov 2019 09:38:35 -0400
Jason Gunthorpe <jgg@...pe.ca> wrote:
> On Tue, Nov 19, 2019 at 10:59:20PM -0500, Jason Wang wrote:
>
> > > > The interface between vfio and userspace is
> > > > based on virtio, which is IMHO much better than
> > > > a vendor-specific one. Userspace stays vendor agnostic.
> > >
> > > Why is that even a good thing? It is much easier to provide drivers
> > > via qemu/etc in user space than it is to make kernel upgrades. We've
> > > learned this lesson many times.
> >
> > For upgrades, since we have a unified interface, it could be done
> > through:
> >
> > 1) switch the datapath from hardware to software (e.g. vhost)
> > 2) unload and load the driver
> > 3) switch the datapath back
> >
> > Having drivers in user space has other issues; there are a lot of
> > customers who want to stick to kernel drivers.
>
> So you want to support upgrade of kernel modules, but runtime
> upgrading the userspace part is impossible? Seems very strange to me.
>
> > > This is why we have had the philosophy that if it doesn't need to be
> > > in the kernel it should be in userspace.
> >
> > Let me clarify again. This framework aims to support both kernel
> > drivers and userspace drivers. This series only contains the kernel
> > driver part. What it does is allow the kernel virtio driver to
> > control vDPA devices. Then we can provide a unified interface for
> > all of VMs, containers and bare metal. For this use case, I don't
> > see a way to leave the driver in userspace other than injecting
> > traffic back through vhost/TAP, which is ugly.
>
> Binding to the other kernel virtio drivers is a reasonable
> justification, but none of this comes through in the patch cover
> letters or patch commit messages.
>
> > > > That has lots of security and portability implications and isn't
> > > > appropriate for everyone.
> > >
> > > This is already using vfio. It doesn't make sense to claim that using
> > > vfio properly is somehow less secure or less portable.
> > >
> > > What I find particularly ugly is that this 'IFC VF NIC' driver
> > > pretends to be a mediated vfio device, but actually bypasses all the
> > > mediated device ops for managing dma security and just directly plugs
> > > the system IOMMU for the underlying PCI device into vfio.
> >
> > Well, VFIO has multiple types of APIs. The design is to stick to the
> > VFIO DMA model (the container) to make the DMA API work for userspace
> > drivers.
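
For reference, a minimal sketch of the vfio type1 container DMA model
being referred to here; the container/group setup is elided and the
function and values below are illustrative only, not code from this
series:

/*
 * Sketch: userspace maps one of its own buffers at a chosen IOVA
 * through an already-configured type1 container (opened from
 * /dev/vfio/vfio, group attached, VFIO_SET_IOMMU run with
 * VFIO_TYPE1_IOMMU).  The kernel pins the pages and programs the
 * system IOMMU accordingly.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int map_guest_buffer(int container, void *buf, size_t len,
			    unsigned long long iova)
{
	struct vfio_iommu_type1_dma_map map;

	memset(&map, 0, sizeof(map));
	map.argsz = sizeof(map);
	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	map.vaddr = (uintptr_t)buf;	/* userspace virtual address */
	map.iova  = iova;		/* address the device will use for DMA */
	map.size  = len;

	/* type1 pins the pages and installs the mapping in the IOMMU */
	return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}
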
>
> Well, it doesn't. That model, for security, is predicated on vfio
> being the exclusive owner of the device. For instance, if the kernel
> driver were to perform DMA as well, then security would be lost.
>
> > > I suppose this little hack is what is motivating this abuse of vfio in
> > > the first place?
> > >
> > > Frankly I think a kernel driver touching a PCI function for which vfio
> > > is now controlling the system iommu is a violation of the security
> > > model, and I'm very surprised AlexW didn't NAK this idea.
> > >
> > > Perhaps it is because none of the patches actually describe how the
> > > DMA security model for this so-called mediated device works? :(
> > >
> > > Or perhaps it is because this submission is split up so much it is
> > > hard to see what is being proposed? (I note this IFC driver is the
> > > first user of the mdev_set_iommu_device() function)
> >
> > Are you objecting to the mdev_set_iommu_device() stuff here?
>
> I'm questioning if it fits the vfio PCI device security model, yes.
The mdev IOMMU backing device model is for when an mdev device has
IOMMU based isolation, either via the PCI requester ID or via requester
ID + PASID. For example, an SR-IOV VF may be used by a vendor to
provide IOMMU based translation and isolation, but the VF may not
otherwise be complete enough to provide a self-contained device. It
might require explicit coordination and interaction with the PF driver,
i.e. mediation. The IOMMU backing device is certainly not meant to
share an IOMMU address space with host drivers, except as necessary for
the mediation of the device. The vfio model manages the IOMMU domain of
the backing device exclusively; any attempt to dual-host the device
with respect to the IOMMU should fault in the dma/iommu-ops. Thanks,
Alex
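
For reference, a rough sketch of how a parent driver registers such an
IOMMU backing device with the mdev API of this era; the callback and
naming below are illustrative only, not the actual IFC VF driver:

/*
 * Sketch: an mdev parent driver whose mdev is backed by an
 * IOMMU-isolated function (here, the parent SR-IOV VF) tells vfio
 * about the backing device, so the type1 container attaches the VF's
 * IOMMU group and programs guest DMA mappings into the system IOMMU
 * instead of relying on page pinning plus software mediation.
 */
#include <linux/mdev.h>
#include <linux/pci.h>

static int example_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
{
	struct pci_dev *vf = to_pci_dev(mdev_parent_dev(mdev));

	/* ... allocate and initialize per-mdev state for the VF ... */

	/*
	 * Register the VF as the IOMMU backing device for this mdev.
	 * After this, the vfio DMA model expects to own the VF's IOMMU
	 * domain exclusively, per the discussion above.
	 */
	return mdev_set_iommu_device(mdev_dev(mdev), &vf->dev);
}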