Message-ID: <1ea2fa65-650e-bb09-f9c6-361dfd9b0b77@redhat.com>
Date:   Mon, 25 Nov 2019 20:59:42 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Jason Gunthorpe <jgg@...pe.ca>, Tiwei Bie <tiwei.bie@...el.com>
Cc:     Alex Williamson <alex.williamson@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Parav Pandit <parav@...lanox.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        davem@...emloft.net, gregkh@...uxfoundation.org,
        Dave Ertman <david.m.ertman@...el.com>, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org, nhorman@...hat.com,
        sassmann@...hat.com, Kiran Patil <kiran.patil@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus


On 2019/11/25 8:09 AM, Jason Gunthorpe wrote:
> On Sun, Nov 24, 2019 at 10:51:24PM +0800, Tiwei Bie wrote:
>
>>>> You removed JasonW's other reply in the above quote. He said clearly
>>>> that we do want/need to assign parts of the device BAR to the VM.
>>> Generally we don't look at patches based on stuff that isn't in them.
>> The hardware is ready, and it's really necessary (for performance).
>> It was planned to be added immediately after the current series. If
>> you want, it certainly can be included right now.
> I don't think it makes a significant difference; there are enough
> reasons already that this does not belong in vfio. Both Greg and I
> were already very much against using mdev as an alternative to the
> driver core.


Don't get us wrong; this is what Greg said in v13 [1]:

"
Also, see the other conversations we are having about a "virtual" bus
and devices.  I do not want to have two different ways of doing the same
thing in the kernel at the same time please.  Please work together with
the Intel developers to solve this in a unified way, as you both
need/want the same thing here.

Neither this, nor the other proposal can be accepted until you all agree
on the design and implementation.
"

[1] https://lkml.org/lkml/2019/11/18/521


>
>>>> IIUC, your point is to suggest we invent a new DMA API for userspace
>>>> to use instead of leveraging VFIO's well-defined DMA API. Even if we
>>>> didn't use VFIO at all, I would imagine it could end up very VFIO-like
>>>> (e.g. caps for BAR + container/group for DMA) eventually.
>>> None of the other user dma subsystems seem to have the problems you
>>> are imagining here. Perhaps you should try it first?
>> Actually, the VFIO DMA API wasn't used at the beginning of vhost-mdev.
>> But after the upstream discussion during the RFC stage last year, the
>> conclusion was that leveraging VFIO's existing DMA API would be the
>> better choice, so vhost-mdev switched to that direction.
> Well, unfortunately, I think that discussion may have led you
> wrong. Do you have a link? Did you post an ICF driver that didn't use vfio?


Why do you think the driver posted in [2] uses VFIO?

[2] https://lkml.org/lkml/2019/11/21/479

Thanks


>
> Jason
>
