Date:   Fri, 22 Nov 2019 16:45:38 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Jason Gunthorpe <jgg@...pe.ca>
Cc:     Alex Williamson <alex.williamson@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Parav Pandit <parav@...lanox.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        davem@...emloft.net, gregkh@...uxfoundation.org,
        Dave Ertman <david.m.ertman@...el.com>, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org, nhorman@...hat.com,
        sassmann@...hat.com, Kiran Patil <kiran.patil@...el.com>,
        Tiwei Bie <tiwei.bie@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus


On 2019/11/21 10:17 PM, Jason Gunthorpe wrote:
> On Thu, Nov 21, 2019 at 03:21:29PM +0800, Jason Wang wrote:
>>> The role of vfio has traditionally been around secure device
>>> assignment of a HW resource to a VM. I'm not totally clear on what the
>>> role of mdev is seen to be, but all the mdev drivers in the tree seem
>>> to make 'and pass it to KVM' a big part of their description.
>>>
>>> So, looking at the virtio patches, I see some intended use is to map
>>> some BAR pages into the VM.
>> Nope, at least not for the current stage. It still depends on the
>> virtio-net-pci emulation in qemu to work. In the future, we will allow such
>> mapping only for the doorbell.
> There has been a lot of emails today, but I think this is the main
> point I want to respond to.
>
> Using vfio when you don't even assign any part of the device BAR to
> the VM is, frankly, a gigantic misuse, IMHO.


That's not a compelling point. If you go through the discussion on 
vhost-mdev from last year, direct mapping of the doorbell has been 
accounted for since that time [1]. It works because the doorbell is 
stateless. Having an arbitrary BAR mapped directly into the VM may 
cause lots of trouble for migration, since it requires a vendor-specific 
way to get the state of the device, and I don't think a vendor-specific 
migration driver is acceptable in qemu. What's more, direct mapping 
through MMIO is even optional (see CONFIG_VFIO_PCI_MMAP), and vfio 
supports buses without any MMIO region.
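
For illustration, here is a minimal userspace sketch (assuming a
hypothetical doorbell region index and an already-opened VFIO device
fd) of how such a direct mapping would go through the standard VFIO
region API, and why the mapping stays optional from the kernel's side:

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stdio.h>

/* Sketch only: DOORBELL_REGION_INDEX is a made-up index used for
 * illustration; the real index depends on the parent device driver. */
#define DOORBELL_REGION_INDEX 2

static void *map_doorbell(int device_fd)
{
        struct vfio_region_info info = {
                .argsz = sizeof(info),
                .index = DOORBELL_REGION_INDEX,
        };
        void *db;

        if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0) {
                perror("VFIO_DEVICE_GET_REGION_INFO");
                return NULL;
        }

        /* The kernel only advertises this flag when the region may be
         * mapped directly (cf. CONFIG_VFIO_PCI_MMAP); otherwise the
         * caller must fall back to read()/write() on the region. */
        if (!(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
                return NULL;

        db = mmap(NULL, info.size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, device_fd, info.offset);
        return db == MAP_FAILED ? NULL : db;
}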


>
> Just needing userspace DMA is not, in any way, a justification to use
> vfio.
>
> We have extensive library interfaces in the kernel to do userspace DMA
> and subsystems like GPU and RDMA are full of example uses of this kind
> of stuff.


I'm not sure which library you mean here. Are any of those libraries 
used by qemu? If not, what's the reason?

For virtio, we need a device-agnostic API which supports migration. Is 
that something the libraries you mention here can provide?


>   Everything from on-device IOMMU to system IOMMU to PASID. If
> you find things missing then we need to improve those library
> interfaces, not further abuse VFIO.
>
> Further, I do not think it is wise to design the userspace ABI around
> a simplistic implementation that can't do BAR assignment,


Again, vhost-mdev follows the VFIO ABI; no new ABI is invented, and 
mmap() was kept there for mapping device regions.


> and can't
> support multiple virtio rings on single PCI function.


How do you know multiple virtio rings can't be supported? That should 
be addressed at the level of the parent device, not the virtio-mdev 
framework, no?


> This stuff is
> clearly too premature.


It depends on how mature you want it to be. Both of the points above 
look invalid to me.


>
> My advice is to proceed as a proper subsystem with your own chardev,
> own bus type, etc and maybe live in staging for a bit until 2-3
> drivers are implementing the ABI (or at the very least agreeing with),
> as is the typical process for Linux.


I'm open to comments for sure, but looking at all the requirements for 
vDPA, most of them can be met through existing modules. That is a 
simplification not only for development but also for the management 
layer and userspace drivers.


>
> Building a new kernel ABI is hard (this is why I advised to use a
> userspace driver).


Well, it looks to me like my clarification has been ignored several 
times. There's no new ABI invented in this series, no?


> It has to go through the community process at the
> usual pace.


What do you mean by "usual pace"? It has been more than 1.5 years since 
the first version of vhost-mdev [1] was posted on the list.

[1] https://lwn.net/Articles/750770/

Thanks

>
> Jason
>
