Date:   Thu, 21 Nov 2019 14:59:51 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Jason Gunthorpe <jgg@...pe.ca>,
        Alex Williamson <alex.williamson@...hat.com>
Cc:     "Michael S. Tsirkin" <mst@...hat.com>,
        Parav Pandit <parav@...lanox.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        davem@...emloft.net, gregkh@...uxfoundation.org,
        Dave Ertman <david.m.ertman@...el.com>, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org, nhorman@...hat.com,
        sassmann@...hat.com, Kiran Patil <kiran.patil@...el.com>,
        Tiwei Bie <tiwei.bie@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus


On 2019/11/21 2:11 AM, Jason Gunthorpe wrote:
> On Wed, Nov 20, 2019 at 10:28:56AM -0700, Alex Williamson wrote:
>>>> Are you objecting to the mdev_set_iommu_device() stuff here?
>>> I'm questioning if it fits the vfio PCI device security model, yes.
>> The mdev IOMMU backing device model is for when an mdev device has
>> IOMMU based isolation, either via the PCI requester ID or via requester
>> ID + PASID.  For example, an SR-IOV VF may be used by a vendor to
>> provide IOMMU based translation and isolation, but the VF may not be
>> complete otherwise to provide a self contained device.  It might
>> require explicit coordination and interaction with the PF driver, ie.
>> mediation.
> In this case the PF does not look to be involved; the IFC kernel
> driver is only manipulating registers in the same VF that the vfio
> owns the IOMMU for.
>
> This is why I keep calling it a "so-called mediated device" because it
> is absolutely not clear what the kernel driver is mediating.


It tries to mediate between virtio commands and the real device. It 
works much like an mdev PCI device, which mediates between PCI commands 
and the real device. That is exactly what the mediator pattern[1] 
describes, no?

[1] https://en.wikipedia.org/wiki/Mediator_pattern
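
To make the analogy concrete, here is a rough sketch of that kind of 
mediation (all names here -- my_mdev_cfg_write(), my_hw_write32(), 
MY_HW_QUEUE_PFN -- are invented for illustration, not the actual IFC 
driver code):

#include <linux/types.h>
#include <linux/virtio_pci.h>	/* VIRTIO_PCI_QUEUE_PFN (legacy layout) */

#define MY_HW_QUEUE_PFN	0x40	/* hypothetical device register */

struct my_mdev_state {		/* hypothetical per-mdev state */
	void __iomem *hw;
};

static bool my_queue_addr_valid(struct my_mdev_state *s, u32 pfn);
static void my_hw_write32(void __iomem *hw, u32 reg, u32 val);

/* Take a generic virtio config write from the consumer and mediate it
 * into a device-specific register access, validating along the way. */
static int my_mdev_cfg_write(struct my_mdev_state *state, u32 offset,
			     u32 val)
{
	switch (offset) {
	case VIRTIO_PCI_QUEUE_PFN:	/* guest programs a queue address */
		if (!my_queue_addr_valid(state, val))
			return -EINVAL;
		my_hw_write32(state->hw, MY_HW_QUEUE_PFN, val);
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}

An mdev PCI device does the same thing one layer down, with PCI config 
cycles instead of virtio commands.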


> Nearly
> all its work is providing a subsystem-style IOCTL interface under the
> existing vfio multiplexer unrelated to vfio requirements for DMA.


What do you mean by "unrelated to vfio"? The ioctl() interface behind 
its device ops is pretty device specific. And the IFC VF driver doesn't 
see ioctls at all; it only sees virtio commands.
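
For reference, that ioctl is just one callback in the parent-supplied 
device ops. From include/linux/mdev.h (circa 5.4; check the tree, the 
exact field list may have moved on):

struct mdev_parent_ops {
	struct module   *owner;
	const struct attribute_group **dev_attr_groups;
	const struct attribute_group **mdev_attr_groups;
	struct attribute_group **supported_type_groups;

	int     (*create)(struct kobject *kobj, struct mdev_device *mdev);
	int     (*remove)(struct mdev_device *mdev);
	int     (*open)(struct mdev_device *mdev);
	void    (*release)(struct mdev_device *mdev);
	ssize_t (*read)(struct mdev_device *mdev, char __user *buf,
			size_t count, loff_t *ppos);
	ssize_t (*write)(struct mdev_device *mdev, const char __user *buf,
			size_t count, loff_t *ppos);
	/* the multiplexer in question: dispatched per device, so its
	 * contract is whatever the parent driver makes it */
	long	(*ioctl)(struct mdev_device *mdev, unsigned int cmd,
			unsigned long arg);
	int	(*mmap)(struct mdev_device *mdev, struct vm_area_struct *vma);
};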


>
>> The IOMMU backing device is certainly not meant to share an IOMMU
>> address space with host drivers, except as necessary for the
>> mediation of the device.  The vfio model manages the IOMMU domain of
>> the backing device exclusively, any attempt to dual-host the device
>> respective to the IOMMU should fault in the dma/iommu-ops.  Thanks,
> Sounds more reasonable if the kernel dma_ops are prevented while vfio
> is using the device.
>
> However, to me it feels wrong that just because a driver wishes to use
> PASID or IOMMU features it should go through vfio and mediated
> devices.
>
> It is not even necessary as we have several examples already of
> drivers using these features without vfio.


I'm confused. Are you suggesting a new module to support fine-grained 
DMA isolation for userspace drivers? How different would that look from 
the existing VFIO?


>
> I feel like mdev is suffering from mission creep. I see people
> proposing to use mdev for many wild things, the Mellanox SF stuff in
> the other thread and this 'virtio subsystem' being the two that have
> come up publicly this month.
>
> Putting some boundaries on mdev usage would really help people know
> when to use it.


And forbid people from extending it? Do you agree that there are lots 
of common requirements (sketched in code below) between:

- mediation between virtio and a real device
- mediation between PCI and a real device

?
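
A hypothetical sketch of that common shape (all names invented for 
illustration):

#include <linux/types.h>
#include <linux/mm.h>

/* The common shape of mediation: only the command format on top
 * (virtio config vs. PCI config) differs between the two cases. */
struct mediated_dev_ops {
	ssize_t (*cfg_read)(void *priv, u64 off, void *buf, size_t len);
	ssize_t (*cfg_write)(void *priv, u64 off, const void *buf,
			     size_t len);
	/* expose the real device's DMA-capable resources (queues,
	 * doorbells) to whoever consumes the mediated device */
	int (*mmap)(void *priv, struct vm_area_struct *vma);
};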


> My top two from this discussion would be:
>
> - mdev devices should only bind to vfio. It is not a general kernel
>    driver matcher mechanism. It is not 'virtual-bus'.


It's still unclear to me why mdev must bind to vfio. They are coupled, 
but only loosely. I would argue that any device doing mediation between 
a driver and a device could go through mdev. Binding mdev to vfio means 
you have to invent something else to support kernel drivers, and the 
parent must then be prepared for two different APIs. An mdev device 
itself won't be a bus, but mdev could provide helpers to build a 
mediated bus.
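
Note that mdev already has a bus_type internally, and vfio-mdev is just 
one driver registered on it. Mechanically, a kernel consumer could bind 
the same way; a sketch against the existing mdev_register_driver() 
interface (the virtio-transport parts are hypothetical):

#include <linux/mdev.h>
#include <linux/module.h>

static int virtio_mdev_probe(struct device *dev)
{
	struct mdev_device *mdev = mdev_from_dev(dev);

	/* hypothetical: wire @mdev up as a virtio transport for a
	 * kernel driver instead of handing it to the vfio ioctls */
	(void)mdev;
	return 0;
}

static void virtio_mdev_remove(struct device *dev)
{
	/* hypothetical: tear the virtio transport down */
}

static struct mdev_driver virtio_mdev_driver = {
	.name	= "virtio_mdev",
	.probe	= virtio_mdev_probe,
	.remove	= virtio_mdev_remove,
};

static int __init virtio_mdev_init(void)
{
	return mdev_register_driver(&virtio_mdev_driver, THIS_MODULE);
}
module_init(virtio_mdev_init);

static void __exit virtio_mdev_exit(void)
{
	mdev_unregister_driver(&virtio_mdev_driver);
}
module_exit(virtio_mdev_exit);

MODULE_LICENSE("GPL");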


>
> - mdev & vfio are not a substitute for a proper kernel subsystem. We
>    shouldn't export a complex subsystem-like ioctl API through
>    vfio ioctl extensions.


I would say that even though e.g. the region-based VFIO device API 
looks generic, it still carries device/bus-specific information. It 
would be rather simple to switch back to the region API and build the 
vhost protocol on top, but would that really be so different?
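
Concretely, the region API userspace sees is generic (standard VFIO 
uapi below), but what a given region means is entirely up to the device 
model behind the fd:

#include <linux/vfio.h>
#include <sys/ioctl.h>

/* Userspace sketch: query region 0 of a VFIO device fd. The struct and
 * ioctl are generic; whether the region is a PCI BAR (vfio-pci), a
 * virtio config window, or something else is device/bus specific. */
static int query_region0(int device_fd)
{
	struct vfio_region_info info = {
		.argsz = sizeof(info),
		.index = 0,
	};

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
		return -1;
	/* info.size and info.offset are generic; their meaning is not */
	return 0;
}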


>   Make a proper subsystem, it is not so hard.


Vhost is the subsystem, but then how do we abstract the DMA there? It 
would end up more than 99% similar to VFIO.
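
For comparison, this is all VFIO's type1 backend asks of userspace for 
a DMA mapping (from include/uapi/linux/vfio.h); a vhost equivalent 
(hypothetical) would need the same triple plus the same page pinning 
and accounting underneath:

/* Payload of VFIO_IOMMU_MAP_DMA: */
struct vfio_iommu_type1_dma_map {
	__u32	argsz;
	__u32	flags;
	__u64	vaddr;	/* process virtual address */
	__u64	iova;	/* IO virtual address */
	__u64	size;	/* size of mapping (bytes) */
};
/* A hypothetical VHOST_IOMMU_MAP_DMA would carry the same fields and
 * do the same pin/map work -- hence "more than 99% similar". */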

Thanks


>
> Maybe others agree?
>
> Thanks,
> Jason
>
