Date:   Wed, 27 Nov 2019 18:58:58 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Martin Habets <mhabets@...arflare.com>,
        Parav Pandit <parav@...lanox.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
Cc:     Dave Ertman <david.m.ertman@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "nhorman@...hat.com" <nhorman@...hat.com>,
        "sassmann@...hat.com" <sassmann@...hat.com>,
        "jgg@...pe.ca" <jgg@...pe.ca>, Kiran Patil <kiran.patil@...el.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        "Bie, Tiwei" <tiwei.bie@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus


On 2019/11/26 8:26 PM, Martin Habets wrote:
> On 22/11/2019 16:19, Parav Pandit wrote:
>>
>>> From: Jason Wang <jasowang@...hat.com>
>>> Sent: Friday, November 22, 2019 3:14 AM
>>>
>>> On 2019/11/21 11:10 PM, Martin Habets wrote:
>>>> On 19/11/2019 04:08, Jason Wang wrote:
>>>>> On 2019/11/16 7:25 AM, Parav Pandit wrote:
>>>>>> Hi Jeff,
>>>>>>
>>>>>>> From: Jeff Kirsher <jeffrey.t.kirsher@...el.com>
>>>>>>> Sent: Friday, November 15, 2019 4:34 PM
>>>>>>>
>>>>>>> From: Dave Ertman <david.m.ertman@...el.com>
>>>>>>>
>>>>>>> This is the initial implementation of the Virtual Bus,
>>>>>>> virtbus_device and virtbus_driver.  The virtual bus is a software
>>>>>>> based bus intended to support lightweight devices and drivers and
>>>>>>> provide matching between them and probing of the registered drivers.
>>>>>>>
> >>>>>>> The primary purpose of the virtual bus is to provide matching
>>>>>>> services and to pass the data pointer contained in the
>>>>>>> virtbus_device to the virtbus_driver during its probe call.  This
>>>>>>> will allow two separate kernel objects to match up and start
>>> communication.
> >>>>>> It is fundamental to know which bus the rdma device created by the
> >>> virtbus_driver will be anchored to, for non-abusive use.
>>>>>> virtbus or parent pci bus?
>>>>>> I asked this question in v1 version of this patch.
>>>>>>
> >>>>>> Also, since it says 'to support lightweight devices', documenting that
> >>> information is critical to avoid ambiguity.
> >>>>>> Since I have been working for a while on the subbus/subdev_bus/xbus/mdev [1],
> >>> whatever we want to call it, it overlaps with your comment about 'to support
> >>> lightweight devices'.
> >>>>>> Hence let's make things crystal clear whether the purpose is 'only a
> >>> matching service' or also 'lightweight devices'.
> >>>>>> If this is only a matching service, let's please remove the 'lightweight
> >>> devices' part.
> >>>>> Yes, if it's matching + lightweight devices, its function is almost a duplication
> >>> of mdev. And I have been working for a while on extending mdev [1] to be a generic
> >>> module to support any type of virtual device. The advantages of mdev are:
>>>>> 1) ready for the userspace driver (VFIO based)
>>>>> 2) have a sysfs/GUID based management interface
>>>> In my view this virtual-bus is more generic and more flexible than mdev.
>>>
>>> Even after the series [1] here?
> I have been following that series. It does make mdev more flexible, and almost turns it into a real bus.
> Even with those improvements to mdev the virtual-bus is in my view still more generic and more flexible,
> and hence more future-proof.


So the only differences so far, after that series, are:

1) mdev has sysfs support
2) mdev has support from vfio

For 1) we can decouple that part to be more flexible; for 2) I think you 
would still need that part, rather than inventing a new VFIO driver (e.g. 
vfio-virtual-bus)?


>
> >>>> What for you are the advantages of mdev are, to me, some of its
> >>> disadvantages.
>>>> The way I see it we can provide rdma support in the driver using virtual-bus.
>> This is fine, because it is only used for matching service.
>>
> >>> Yes, but since it does matching only, you can do everything you want.
> >>> But it looks to me like Greg does not want a bus to be an API multiplexer. So if a
> >>> dedicated bus is desired, it won't be much code to have a bus of your own.
> I did not intend for it to be a multiplexer. And I very much prefer a generic bus over any driver-specific bus.
>
>> Right. virtbus shouldn't be a multiplexer.
>> Otherwise mdev can be improved (abused) exactly the way virtbus might be, where the 'm' in mdev stands for multiplexer too. :-)
>> No, we shouldn’t do that.
>>
>> Listening to Greg and Jason G, I agree that virtbus shouldn't be a multiplexer.
>> There are a few basic differences between subfunctions and matching-service device objects.
>> Subfunctions will, over a period of time, have several attributes; a few that I think of right away are:
>> 1. BAR resource info, write-combine info
>> 2. irq vector details
>> 3. a unique id assigned by the user (while virtbus will not assign such a user id, as its devices are auto-created for the matching service for a PF/VF)
>> 4. the rdma device created by the matched driver resides on the pci bus or parent device,
>> while rdma and netdev devices created over subfunctions are linked to their own 'struct device'.
> This is more aligned with my thinking as well, although I do not call these items subfunctions.
> There can be different devices for different users, where multiple can be active at the same time (with some constraints).
>
> One important thing to note is that there may not be a netdev device. What we traditionally call
> a "network driver" will then only manage the virtualised devices.
>
>> Due to that, the sysfs view for these two different types of devices is a bit different.
>> Putting both on the same bus just doesn't appear right, given the above fundamental differences in the core layer.
> Can you explain which code layer you mean?
>
>>>> At the moment we would need separate mdev support in the driver for
>>>> vdpa, but I hope at some point mdev would become a layer on top of
>>> virtual-bus.
>> How is it optimal to create multiple 'struct device' objects for a single purpose?
>> Especially when one wants to create hundreds of such devices to begin with.
>> A user-facing tool should be able to select the device type and place the device on the right bus.
> At this point I think it is not possible to create a solution that is optimal for all use cases.


Probably yes.


> With the virtual bus we do have a solid foundation going forward, for the users we know now and for
> future ones.


If I understand correctly, if a multiplexer is not preferred, it would not be 
hard to have a bus in your own code; there's not much code that could be reused anyway.

Thanks


>   Optimisation is something that needs to happen over time, without breaking existing users.
>
> As for the user facing tool, the only one I know of that always works is "echo" into a sysfs file.
>
> Best regards,
> Martin
>
