Date:   Tue, 24 Nov 2020 15:01:29 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Samudrala, Sridhar" <sridhar.samudrala@...el.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Jakub Kicinski <kuba@...nel.org>
Cc:     David Ahern <dsahern@...il.com>, Jason Gunthorpe <jgg@...dia.com>,
        Parav Pandit <parav@...dia.com>,
        Saeed Mahameed <saeed@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        Jiri Pirko <jiri@...dia.com>,
        "dledford@...hat.com" <dledford@...hat.com>,
        Leon Romanovsky <leonro@...dia.com>,
        "davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net-next 00/13] Add mlx5 subfunction support


On 2020/11/21 3:04 AM, Samudrala, Sridhar wrote:
>
>
> On 11/20/2020 9:58 AM, Alexander Duyck wrote:
>> On Thu, Nov 19, 2020 at 5:29 PM Jakub Kicinski <kuba@...nel.org> wrote:
>>>
>>> On Wed, 18 Nov 2020 21:35:29 -0700 David Ahern wrote:
>>>> On 11/18/20 7:14 PM, Jakub Kicinski wrote:
>>>>> On Tue, 17 Nov 2020 14:49:54 -0400 Jason Gunthorpe wrote:
>>>>>> On Tue, Nov 17, 2020 at 09:11:20AM -0800, Jakub Kicinski wrote:
>>>>>>
>>>>>>>> Just to refresh all our memory, we discussed and settled on the
>>>>>>>> flow in [2]; RFC [1] followed this discussion.
>>>>>>>>
>>>>>>>> vdpa tool of [3] can add one or more vdpa device(s) on top of an
>>>>>>>> already spawned PF, VF, or SF device.
>>>>>>>
>>>>>>> Nack for the networking part of that. It'd basically be VMDq.
>>>>>>
>>>>>> What are you NAK'ing?
>>>>>
>>>>> Spawning multiple netdevs from one device by slicing up its queues.
>>>>
>>>> Why do you object to that? Slicing up h/w resources for virtual
>>>> whatever has been common practice for a long time.
>>>
>>> My memory of the VMDq debate is hazy, let me rope Alex into this.
>>> I believe the argument was that we should offload software constructs,
>>> not create HW-specific APIs which depend on HW availability and
>>> implementation. So the path we took was offloading macvlan.
>>
>> I think it somewhat depends on the type of interface we are talking
>> about. What we were wanting to avoid was drivers spawning their own
>> unique VMDq netdevs and each having a different way of doing it. The
>> approach Intel went with was to use a MACVLAN offload to handle it.
>> Although I would imagine many would argue the approach is somewhat
>> dated and limiting since you cannot do many offloads on a MACVLAN
>> interface.
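
For reference, the macvlan offload mentioned above hangs off a pair of
ndo hooks on the lower device. A minimal sketch, with the hook
signatures as in include/linux/netdevice.h; the bodies and the "my_"
names are placeholders, not any driver's actual code:

#include <linux/err.h>
#include <linux/netdevice.h>

/* Called when an offload-capable macvlan is stacked on this device;
 * the driver carves out a queue pair and a MAC filter for it and
 * returns an accelerator handle. */
static void *my_dfwd_add_station(struct net_device *pdev,
				 struct net_device *vdev)
{
	/* allocate queues and program the MAC filter for vdev here */
	return ERR_PTR(-EOPNOTSUPP);	/* placeholder */
}

static void my_dfwd_del_station(struct net_device *pdev, void *priv)
{
	/* release the queues/filters referenced by priv */
}

static const struct net_device_ops my_netdev_ops = {
	/* ... the usual ndo_open/ndo_start_xmit/... */
	.ndo_dfwd_add_station	= my_dfwd_add_station,
	.ndo_dfwd_del_station	= my_dfwd_del_station,
};

/* the lower device also advertises NETIF_F_HW_L2FW_DOFFLOAD so that
 * macvlan knows it can request the offload */
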
>
> Yes. We talked about this at netdev 0x14 and the limitations of
> macvlan-based offloads.
> https://netdevconf.info/0x14/session.html?talk-hardware-acceleration-of-container-networking-interfaces 
>
>
> Subfunction seems to be a good model to expose a VMDq VSI or SIOV ADI
> as a netdev for kernel containers. AF_XDP ZC in a container is one of
> the use cases this would address. Today we have to pass the entire
> PF/VF to a container to do AF_XDP.
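
The reason the whole PF/VF has to move today is that an AF_XDP socket
binds to a specific ifindex/queue, and that ifindex must be visible in
the container's netns. A minimal userspace sketch of just the bind
step; UMEM registration and ring setup are omitted, so this would fail
as-is, and "eth0" is only an example name:

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>
#include <net/if.h>

int main(void)
{
	int fd = socket(AF_XDP, SOCK_RAW, 0);
	struct sockaddr_xdp sxdp = {
		.sxdp_family   = AF_XDP,
		/* the interface must exist in this netns */
		.sxdp_ifindex  = if_nametoindex("eth0"),
		.sxdp_queue_id = 0,
	};

	if (fd < 0 || bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp)))
		perror("AF_XDP bind");
	return 0;
}
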
>
> Looks like the current model is to create a subfunction of a specific
> type on the auxiliary bus, do some configuration to assign resources,
> and then activate the subfunction.
>
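
On the driver side, that model looks like an ordinary driver binding on
the auxiliary bus: the parent PF driver spawns an auxiliary device per
subfunction, and a driver like the sketch below binds to it and creates
the netdev. The id string follows the bus's "<module>.<device>" naming
convention ("mlx5_core.sf" here); the "my_" driver is hypothetical:

#include <linux/module.h>
#include <linux/auxiliary_bus.h>

static int my_sf_probe(struct auxiliary_device *adev,
		       const struct auxiliary_device_id *id)
{
	/* allocate per-SF state, set up queues, register the netdev */
	return 0;
}

static void my_sf_remove(struct auxiliary_device *adev)
{
	/* unregister the netdev, free resources */
}

static const struct auxiliary_device_id my_sf_id_table[] = {
	{ .name = "mlx5_core.sf" },
	{ }
};
MODULE_DEVICE_TABLE(auxiliary, my_sf_id_table);

static struct auxiliary_driver my_sf_driver = {
	.probe	  = my_sf_probe,
	.remove	  = my_sf_remove,
	.id_table = my_sf_id_table,
};
module_auxiliary_driver(my_sf_driver);
MODULE_LICENSE("GPL");
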
>>
>> With the VDPA case I believe there is a set of predefined virtio
>> devices that are being emulated and presented so it isn't as if they
>> are creating a totally new interface for this.


vDPA doesn't place any limitation on how the device is created or
implemented. It could be predefined or created dynamically; vDPA leaves
all of that to the parent device with the help of a unified management
API [1]. E.g. it could be a PCI device (PF or VF), a subfunction, or a
software-emulated device.
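
Concretely, the parent side of [1] boils down to registering a
management device with add/del callbacks; what backs the created vdpa
device is entirely the parent's business. A sketch assuming the
dev_add/dev_del shape of the proposed API (names and details may differ
from the version that lands):

#include <linux/module.h>
#include <linux/vdpa.h>
#include <linux/virtio_ids.h>

static int my_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
{
	/* allocate and register a vdpa device named @name, backed by a
	 * PF, VF, SF or software emulation - up to the parent */
	return 0;
}

static void my_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
			    struct vdpa_device *dev)
{
	/* tear the vdpa device down */
}

static const struct vdpa_mgmtdev_ops my_mgmtdev_ops = {
	.dev_add = my_vdpa_dev_add,
	.dev_del = my_vdpa_dev_del,
};

/* which virtio device classes this parent can spawn */
static struct virtio_device_id my_id_table[] = {
	{ VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },
	{ 0 },
};

static struct vdpa_mgmt_dev my_mgmt_dev = {
	.ops	  = &my_mgmtdev_ops,
	.id_table = my_id_table,
	/* .device = parent's struct device */
};

/* called from the parent driver's probe path:
 *	vdpa_mgmtdev_register(&my_mgmt_dev);
 */
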


>>
>> What I would be interested in seeing is if there are any other vendors
>> that have reviewed this and sign off on this approach.


For "this approach" do you mean vDPA subfucntion? My understanding is 
that it's totally vendor specific, vDPA subsystem don't want to be 
limited by a specific type of device.


>> What we don't
>> want to see is Nvidia/Mellanox do this one way, then Broadcom or Intel
>> come along later and have yet another way of doing this. We need an
>> interface and feature set that will work for everyone in terms of how
>> this will look going forward.

For the feature set, it would be hard to force vendors to implement a
common set of features (we can have a recommended set) considering that
features can be negotiated. So the management interface is expected to
provision a common feature set (much as is done with CPU models across
a cluster) to ensure migration compatibility, or QEMU can emulate the
missing features at a performance cost.
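
As a toy illustration of that provisioning idea: the management layer
exposes only the intersection of what the device offers and a fixed
"model" of feature bits, so any host implementing the model can accept
the migrated guest. The bit positions are the virtio-net ones; the rest
is made up for the example:

#include <stdint.h>
#include <stdio.h>

#define VIRTIO_NET_F_CSUM	(1ULL << 0)
#define VIRTIO_NET_F_MRG_RXBUF	(1ULL << 15)
#define VIRTIO_NET_F_MQ		(1ULL << 22)

/* expose only the features present in both the device and the model */
static uint64_t provisioned_features(uint64_t dev_offered, uint64_t model)
{
	return dev_offered & model;
}

int main(void)
{
	uint64_t dev   = VIRTIO_NET_F_CSUM | VIRTIO_NET_F_MRG_RXBUF |
			 VIRTIO_NET_F_MQ;	/* source device offers */
	uint64_t model = VIRTIO_NET_F_CSUM | VIRTIO_NET_F_MRG_RXBUF;

	printf("guest sees: %#llx\n",
	       (unsigned long long)provisioned_features(dev, model));
	return 0;
}
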

Thanks

