Date:   Wed, 20 Nov 2019 15:07:32 -0700
From:   Alex Williamson <alex.williamson@...hat.com>
To:     Jason Gunthorpe <jgg@...pe.ca>
Cc:     Jason Wang <jasowang@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Parav Pandit <parav@...lanox.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        davem@...emloft.net, gregkh@...uxfoundation.org,
        Dave Ertman <david.m.ertman@...el.com>, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org, nhorman@...hat.com,
        sassmann@...hat.com, Kiran Patil <kiran.patil@...el.com>,
        Tiwei Bie <tiwei.bie@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus

On Wed, 20 Nov 2019 14:11:08 -0400
Jason Gunthorpe <jgg@...pe.ca> wrote:

> On Wed, Nov 20, 2019 at 10:28:56AM -0700, Alex Williamson wrote:
> > > > Are you objecting to the mdev_set_iommu_device() stuff here?    
> > > 
> > > I'm questioning if it fits the vfio PCI device security model, yes.  
> > 
> > The mdev IOMMU backing device model is for when an mdev device has
> > IOMMU based isolation, either via the PCI requester ID or via requester
> > ID + PASID.  For example, an SR-IOV VF may be used by a vendor to
> > provide IOMMU based translation and isolation, but the VF may not be
> > complete otherwise to provide a self contained device.  It might
> > require explicit coordination and interaction with the PF driver, ie.
> > mediation.    
> 
> In this case the PF does not look to be involved; the IFC kernel
> driver is only manipulating registers in the same VF that vfio
> owns the IOMMU for.

The mdev_set_iommu_device() call is probably getting caught up in the
confusion of mdev as it exists today being vfio specific.  What I
described in my reply is vfio specific.  The vfio iommu backend is
currently the only code that calls mdev_get_iommu_device(), JasonW
doesn't use it in the virtio-mdev code, so this seems like a stray vfio
specific interface that's set up by IFC but never used.

> This is why I keep calling it a "so-called mediated device" because it
> is absolutely not clear what the kernel driver is mediating. Nearly
> all its work is providing a subsystem-style IOCTL interface under the
> existing vfio multiplexer unrelated to vfio requirements for DMA.

Names don't always keep up with what an interface evolves into; see,
for example, vfio.  However, even in the vfio sense of mediated devices
we have protocol translation.  The mdev vendor driver translates vfio
API callbacks into hardware specific interactions.  Is this really much
different?

> > The IOMMU backing device is certainly not meant to share an IOMMU
> > address space with host drivers, except as necessary for the
> > mediation of the device.  The vfio model manages the IOMMU domain of
> > the backing device exclusively, any attempt to dual-host the device
> > respective to the IOMMU should fault in the dma/iommu-ops.  Thanks,  
> 
> Sounds more reasonable if the kernel dma_ops are prevented while vfio
> is using the device.

AFAIK we can't mix DMA ops and IOMMU ops at the same time and the
domain information necessary for the latter is owned within the vfio
IOMMU backend.

> However, to me it feels wrong that just because a driver wishes to use
> PASID or IOMMU features it should go through vfio and mediated
> devices.

I don't think I said this.  IOMMU backing of an mdev is an acceleration
feature as far as vfio-mdev is concerned.  There are clearly other ways
to use the IOMMU.

> It is not even necessary as we have several examples already of
> drivers using these features without vfio.

Of course.

> I feel like mdev is suffering from mission creep. I see people
> proposing to use mdev for many wild things, the Mellanox SF stuff in
> the other thread and this 'virtio subsystem' being the two that have
> come up publicly this month.

Tell me about it... ;)
 
> Putting some boundaries on mdev usage would really help people know
> when to use it. My top two from this discussion would be:
> 
> - mdev devices should only bind to vfio. It is not a general kernel
>   driver matcher mechanism. It is not 'virtual-bus'.

I think this requires driver-core knowledge to really appreciate.
Otherwise there's apparently a common need to create sub-devices, and
without closer inspection of the bus:driver API contract, it's too easy
to try to abstract the device:driver API via the bus.  mdev already has
a notion that the device itself can use any API, but the interface to
the bus is the vendor-provided, vfio-compatible callbacks.

> - mdev & vfio are not a substitute for a proper kernel subsystem. We
>   shouldn't export a complex subsystem-like ioctl API through
>   vfio ioctl extensions. Make a proper subsystem, it is not so hard.

This is not as clear to me; is "ioctl" used once or twice too often, or
are you describing a defined structure of callbacks as an ioctl API?
The vfio mdev interface is just an extension of the file descriptor
based vfio device API.  The device needs to handle actual ioctls, but
JasonW's virtio-mdev series had their own set of callbacks.  Maybe a
concrete example of this item would be helpful.  Thanks,

Alex
