Message-ID: <CAJ3xEMhfZjKKRGZOJ8CvrbDEQ9jh9kYvZ-aJBsC86ixm+1RnWw@mail.gmail.com>
Date:   Tue, 22 May 2018 17:50:45 +0300
From:   Or Gerlitz <gerlitz.or@...il.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     David Miller <davem@...emloft.net>,
        Linux Netdev List <netdev@...r.kernel.org>,
        oss-drivers@...ronome.com, Andy Gospodarek <gospo@...adcom.com>,
        linux-internal <linux-internal@...lanox.com>
Subject: Re: [PATCH net-next 00/13] nfp: abm: add basic support for advanced
 buffering NIC

On Tue, May 22, 2018 at 10:56 AM, Jakub Kicinski
<jakub.kicinski@...ronome.com> wrote:
> On Mon, May 21, 2018 at 11:32 PM, Or Gerlitz wrote:
>> On Tue, May 22, 2018 at 8:12 AM, Jakub Kicinski wrote:
>>> Hi!
>>>
>>> This series lays groundwork for advanced buffer management NIC feature.
>>> It makes necessary NFP core changes, spawns representors and adds devlink
>>> glue.  Following series will add the actual buffering configuration (patch
>>> series size limit).
>>>
>>> First three patches add support for configuring NFP buffer pools via a
>>> mailbox.  The existing devlink APIs are used for the purpose.
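
The devlink shared-buffer hooks such a mailbox would typically sit
behind look roughly like the following. This is only a minimal sketch
against the devlink API of that era, not the actual nfp code; struct
my_abm and nfp_abm_mbox_set_pool() are made-up placeholders for the
driver state and the vendor mailbox command.

#include <linux/kernel.h>
#include <linux/errno.h>
#include <net/devlink.h>

struct my_abm {
	u32 pool_size[2];		/* cached pool sizes, one per pool */
};

/* Hypothetical stand-in for the vendor mailbox command that programs a
 * buffer pool in NIC memory.  Stubbed out here.
 */
static int nfp_abm_mbox_set_pool(struct my_abm *abm, u16 pool, u32 size)
{
	return 0;
}

static int my_sb_pool_get(struct devlink *devlink, unsigned int sb_index,
			  u16 pool_index,
			  struct devlink_sb_pool_info *pool_info)
{
	struct my_abm *abm = devlink_priv(devlink);

	if (pool_index >= ARRAY_SIZE(abm->pool_size))
		return -EINVAL;

	pool_info->pool_type = DEVLINK_SB_POOL_TYPE_EGRESS;
	pool_info->size = abm->pool_size[pool_index];
	pool_info->threshold_type = DEVLINK_SB_THRESHOLD_TYPE_STATIC;
	return 0;
}

static int my_sb_pool_set(struct devlink *devlink, unsigned int sb_index,
			  u16 pool_index, u32 size,
			  enum devlink_sb_threshold_type threshold_type)
{
	struct my_abm *abm = devlink_priv(devlink);
	int err;

	if (pool_index >= ARRAY_SIZE(abm->pool_size))
		return -EINVAL;
	if (threshold_type != DEVLINK_SB_THRESHOLD_TYPE_STATIC)
		return -EINVAL;

	/* forward the request to the device over the mailbox */
	err = nfp_abm_mbox_set_pool(abm, pool_index, size);
	if (err)
		return err;

	abm->pool_size[pool_index] = size;
	return 0;
}

static const struct devlink_ops my_devlink_ops = {
	.sb_pool_get	= my_sb_pool_get,
	.sb_pool_set	= my_sb_pool_set,
};

The driver would also call devlink_sb_register() at probe time so the
shared buffer shows up, and userspace then drives it with the stock
devlink tool (devlink sb pool show / devlink sb pool set).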
>>>
>>> Third patch allows us to perform small reads from the NFP memory.
>>>
>>> The rest of the patch set adds eswitch mode change support and makes
>>> the driver spawn appropriate representors.
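
On the eswitch side, the mode change that triggers spawning the
representors usually hangs off the devlink eswitch callbacks, roughly as
in the sketch below. Again this is only a sketch against the devlink API
of that era, not the nfp implementation; my_abm, my_spawn_vnic_reprs()
and my_destroy_vnic_reprs() are hypothetical placeholders.

#include <linux/errno.h>
#include <net/devlink.h>

struct my_abm {
	u16 eswitch_mode;
};

/* Hypothetical helpers that create/destroy the vNIC representor
 * netdevs; stubbed out here.
 */
static int my_spawn_vnic_reprs(struct my_abm *abm)
{
	return 0;
}

static void my_destroy_vnic_reprs(struct my_abm *abm)
{
}

static int my_eswitch_mode_get(struct devlink *devlink, u16 *mode)
{
	struct my_abm *abm = devlink_priv(devlink);

	*mode = abm->eswitch_mode;
	return 0;
}

static int my_eswitch_mode_set(struct devlink *devlink, u16 mode)
{
	struct my_abm *abm = devlink_priv(devlink);
	int err = 0;

	if (mode == abm->eswitch_mode)
		return 0;

	switch (mode) {
	case DEVLINK_ESWITCH_MODE_SWITCHDEV:
		/* spawn one repr per vNIC/port when entering switchdev */
		err = my_spawn_vnic_reprs(abm);
		break;
	case DEVLINK_ESWITCH_MODE_LEGACY:
		my_destroy_vnic_reprs(abm);
		break;
	default:
		return -EOPNOTSUPP;
	}

	if (!err)
		abm->eswitch_mode = mode;
	return err;
}

static const struct devlink_ops my_devlink_ops = {
	.eswitch_mode_get	= my_eswitch_mode_get,
	.eswitch_mode_set	= my_eswitch_mode_set,
};

Userspace flips the mode with "devlink dev eswitch set <dev> mode
switchdev", same as for the SR-IOV switchdev drivers.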
>>
>> Hi Jakub,
>>
>> Could you provide a higher-level description of the abm use case and
>> the nature of these representors? I understand that under abm you are
>> modeling the NIC as a switch with vNIC ports. Do a vNIC port and its
>> rep have the same characteristics as a VF and its VF rep (xmit on one
>> side <--> receive on the other side)? Is traffic to be offloaded using
>> TC, etc.? What would one do with a vNIC instance? Hand it to a
>> container, a la the Intel VMDQ concept?
>> Can this be seen as a veth HW offload? etc.

> Yes, the reprs can be used like VF reprs, but that's not the main use
> case. We are targeting the container world with ABM, so no VFs and no
> SR-IOV.  There is only one vNIC per port and no veth offload etc.

One vNIC for multiple containers? Or do you have a (v?) port per container?

> In the most basic scenario with 1 PF corresponding to 1 port there is no
> real use for switching.

Multiple containers? Please clarify this a little more.

> The main purpose here is that we want to set up the buffering and QoS
> inside the NIC (both for TX and RX) and then use eBPF to perform
> filtering, queue assignment and per-application RSS. That's pretty
> much it at this point.
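
As a concrete (if toy) illustration of the eBPF part, a cls_bpf program
in direct-action mode could do the filtering and steer the surviving
flows into a priority band that an mqprio root qdisc then maps to a
hardware queue range. Nothing below is Netronome's actual program; the
port number, priority value and file names are arbitrary examples, and
the byte-swap macro assumes a little-endian build host.

/* Possible loading sequence:
 *   clang -O2 -target bpf -c steer.c -o steer.o
 *   tc qdisc add dev eth0 clsact
 *   tc filter add dev eth0 egress bpf da obj steer.o sec classifier
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <linux/tcp.h>
#include <linux/pkt_cls.h>

/* compile-time byte swap; assumes a little-endian build host */
#define HTONS_CONST(x) ((__u16)((((x) & 0x00ffU) << 8) | (((x) & 0xff00U) >> 8)))

#define APP_PORT 5201	/* arbitrary example "application" port */

__attribute__((section("classifier"), used))
int steer_app_traffic(struct __sk_buff *skb)
{
	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	struct tcphdr *tcp;

	if ((void *)(eth + 1) > data_end)
		return TC_ACT_OK;
	if (eth->h_proto != HTONS_CONST(ETH_P_IP))
		return TC_ACT_OK;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_TCP)
		return TC_ACT_OK;

	tcp = (void *)(iph + 1);	/* assumes no IP options, for brevity */
	if ((void *)(tcp + 1) > data_end)
		return TC_ACT_OK;

	/* "filtering": drop TCP that is not for the application */
	if (tcp->dest != HTONS_CONST(APP_PORT))
		return TC_ACT_SHOT;

	/* "queue assignment": pick a priority band for the app traffic,
	 * which mqprio can map to a dedicated hardware queue range
	 */
	skb->priority = 1;
	return TC_ACT_OK;
}

char _license[] __attribute__((section("license"), used)) = "GPL";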

> Switching, if any, will be a basic bridge offload.  QoS configuration
> will all be done using TC qdisc offload, RED etc., exactly like mlxsw :)
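
For reference, the driver half of RED qdisc offload (the mlxsw-style
model referred to above) is the ndo_setup_tc() hook receiving
TC_SETUP_QDISC_RED. The sketch below is not the nfp code; my_priv,
my_fw_set_red() and my_fw_clear_red() are hypothetical stand-ins for the
driver state and firmware/mailbox calls.

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <net/pkt_cls.h>

struct my_priv {
	u32 red_min;
	u32 red_max;
};

/* Hypothetical firmware/mailbox calls; stubbed out here. */
static int my_fw_set_red(struct my_priv *priv, u32 min, u32 max, bool ecn)
{
	return 0;
}

static int my_fw_clear_red(struct my_priv *priv)
{
	return 0;
}

static int my_setup_tc_red(struct net_device *dev,
			   struct tc_red_qopt_offload *opt)
{
	struct my_priv *priv = netdev_priv(dev);

	switch (opt->command) {
	case TC_RED_REPLACE:
		priv->red_min = opt->set.min;
		priv->red_max = opt->set.max;
		/* push the RED thresholds down to the device */
		return my_fw_set_red(priv, opt->set.min, opt->set.max,
				     opt->set.is_ecn);
	case TC_RED_DESTROY:
		return my_fw_clear_red(priv);
	default:
		return -EOPNOTSUPP;	/* stats etc. not shown */
	}
}

static int my_ndo_setup_tc(struct net_device *dev, enum tc_setup_type type,
			   void *type_data)
{
	switch (type) {
	case TC_SETUP_QDISC_RED:
		return my_setup_tc_red(dev, type_data);
	default:
		return -EOPNOTSUPP;
	}
}

Userspace then uses plain tc, along the lines of:

  tc qdisc replace dev eth0 root handle 1: red limit 400000 \
      min 30000 max 90000 avpkt 1000 burst 55 ecn

and sch_red hands those parameters to the driver through that hook.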

I guess I'll understand it better once you clarify the multiple
containers thing. Thanks for the details and openness.
