Message-ID: <20180522121415.500330ad@cakuba>
Date:   Tue, 22 May 2018 12:14:15 -0700
From:   Jakub Kicinski <jakub.kicinski@...ronome.com>
To:     Or Gerlitz <gerlitz.or@...il.com>
Cc:     David Miller <davem@...emloft.net>,
        Linux Netdev List <netdev@...r.kernel.org>,
        oss-drivers@...ronome.com, Andy Gospodarek <gospo@...adcom.com>,
        linux-internal <linux-internal@...lanox.com>
Subject: Re: [PATCH net-next 00/13] nfp: abm: add basic support for advanced
 buffering NIC

On Tue, 22 May 2018 17:50:45 +0300, Or Gerlitz wrote:
> On Tue, May 22, 2018 at 10:56 AM, Jakub Kicinski wrote:
> > On Mon, May 21, 2018 at 11:32 PM, Or Gerlitz wrote:  
> >> On Tue, May 22, 2018 at 8:12 AM, Jakub Kicinski wrote:  
> >>> Hi!
> >>>
> >>> This series lays groundwork for advanced buffer management NIC feature.
> >>> It makes necessary NFP core changes, spawns representors and adds devlink
> >>> glue.  A following series will add the actual buffering configuration
> >>> (split out to stay under the patch series size limit).
> >>>
> >>> First three patches add support for configuring NFP buffer pools via a
> >>> mailbox.  The existing devlink APIs are used for the purpose.
> >>>
> >>> Third patch allows us to perform small reads from the NFP memory.
> >>>
> >>> The rest of the patch set adds eswitch mode change support and makes
> >>> the driver spawn appropriate representors.  
> >>
> >> Hi Jakub,
> >>
> >> Could you provide a more high-level description of the abm use case
> >> and the nature of these representors? I understand that under abm you
> >> are modeling the NIC as a switch with vNIC ports. Do a vNIC port and
> >> its vNIC port rep have the same characteristics as a VF and its VF rep
> >> (xmit on one side <--> receive on the other side)? Is traffic to be
> >> offloaded using TC, etc.? What would one do with a vNIC instance, hand
> >> it to a container a la the Intel VMDq concept?
> >> Can this be seen as veth HW offload? etc.  
> 
> > Yes, the reprs can be used like VF reprs but that's not the main use
> > case. We are targeting the container world with ABM, so no VFs and no
> > SR-IOV.  There is only one vNIC per port and no veth offload etc. In  
> 
> One vNIC for multiple containers? Or do you have a (v?) port per container?

One vNIC with many queues for multiple containers.  If containers have
QoS requirements they should be pinned to specific CPUs, and eBPF on the
card can be used to RSS traffic only to the queues associated with those
CPUs, while TC qdisc offload can be used to manipulate the scheduling of
those queues (and ECN marking).
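
To make that concrete, here is a minimal sketch of the kind of filter
that could be offloaded to the card.  The port number and the drop
policy are made up for the example, and the queue-assignment/RSS side
is device-specific, so it is not shown:

/* Illustrative sketch only: a trivial XDP filter of the sort that
 * could run on the NIC.  The port number is made up; per-queue RSS /
 * queue assignment is device-specific and not shown here.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int app_filter(struct xdp_md *ctx)
{
        void *data_end = (void *)(long)ctx->data_end;
        void *data = (void *)(long)ctx->data;
        struct ethhdr *eth = data;
        struct iphdr *iph;
        struct tcphdr *tcp;

        if ((void *)(eth + 1) > data_end ||
            eth->h_proto != bpf_htons(ETH_P_IP))
                return XDP_PASS;

        iph = (void *)(eth + 1);
        if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_TCP)
                return XDP_PASS;

        tcp = (void *)(iph + 1);                /* assumes no IP options */
        if ((void *)(tcp + 1) > data_end)
                return XDP_PASS;

        /* Example policy: drop traffic for a low-priority service so it
         * never lands on the queues of the latency-sensitive containers. */
        if (tcp->dest == bpf_htons(12345))
                return XDP_DROP;

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";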

> > In the most basic scenario with 1 PF corresponding to 1 port there is no
> > real use for switching.  
> 
> Multiple containers? Please clarify that a little more.

Yes, if we had a netdev per container that would mean we would need
switching.  But veth/macvlan/ipvlan offload, and therefore multiple
netdevs, is not a requirement right now.

IOW we have the eBPF-programmable RSS and we are trying to extend the
QoS side to make sure that critical/important containers' RX/TX doesn't
get disturbed by less important workloads.
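
On the qdisc side the plan is to reuse the same TC offload hooks mlxsw
already exercises for RED.  Very roughly, with made-up nfp_abm_* names
and a hypothetical mailbox helper standing in for the real firmware
interface, the driver end could look like:

/* Rough sketch of the driver-side RED/ECN offload hook, modeled on the
 * TC_SETUP_QDISC_RED path; names and threshold handling here are
 * illustrative assumptions, not the posted code.
 */
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <net/pkt_cls.h>

/* Hypothetical helper: program the marking level of one queue via the
 * vNIC mailbox (~0U meaning "no marking"). */
int nfp_abm_ctrl_set_q_lvl(struct net_device *netdev, u32 parent, u32 level);

static int nfp_abm_setup_tc_red(struct net_device *netdev,
                                struct tc_red_qopt_offload *opt)
{
        switch (opt->command) {
        case TC_RED_REPLACE:
                /* Example policy: only pure ECN marking is offloaded. */
                if (!opt->set.is_ecn)
                        return -EOPNOTSUPP;
                /* opt->parent identifies the mq child, i.e. the queue;
                 * program its marking threshold through the mailbox. */
                return nfp_abm_ctrl_set_q_lvl(netdev, opt->parent,
                                              opt->set.min);
        case TC_RED_DESTROY:
                return nfp_abm_ctrl_set_q_lvl(netdev, opt->parent, ~0U);
        default:
                return -EOPNOTSUPP;
        }
}

static int nfp_abm_setup_tc(struct net_device *netdev,
                            enum tc_setup_type type, void *type_data)
{
        switch (type) {
        case TC_SETUP_QDISC_RED:
                return nfp_abm_setup_tc_red(netdev, type_data);
        default:
                return -EOPNOTSUPP;
        }
}

The host-side configuration would then be the usual tc commands, e.g. a
RED child per mq queue with the ecn flag set.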

> > The main purpose here is that we want to set up the buffering and QoS
> > inside the NIC (both for TX and RX) and then use eBPF to perform
> > filtering, queue assignment and per-application RSS. That's pretty
> > much it at this point.  
> >
> > Switching, if any, will be a basic bridge offload.  QoS configuration
> > will all be done using TC qdisc offload, RED etc., exactly like mlxsw :)  
> 
> I guess I'll understand it better once you clarify the multiple
> containers thing.
> Thanks for the details and openness.
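
And to tie this back to the cover letter: the buffer pool configuration
is just the existing devlink shared buffer ops, with the request
forwarded to the firmware over the vNIC mailbox.  A minimal sketch,
where the mailbox helper is again a made-up stand-in:

/* Minimal sketch of the devlink glue for the buffer pools; the mailbox
 * helper below is a stand-in for the real firmware call.
 */
#include <linux/errno.h>
#include <net/devlink.h>

/* Hypothetical: resize one NFP buffer pool via the vNIC mailbox. */
int nfp_abm_ctrl_set_pool(struct devlink *devlink, u16 pool_index, u32 size);

static int nfp_devlink_sb_pool_set(struct devlink *devlink,
                                   unsigned int sb_index, u16 pool_index,
                                   u32 size,
                                   enum devlink_sb_threshold_type threshold_type)
{
        /* Pools are sized in bytes with static thresholds in this sketch. */
        if (threshold_type != DEVLINK_SB_THRESHOLD_TYPE_STATIC)
                return -EINVAL;

        return nfp_abm_ctrl_set_pool(devlink, pool_index, size);
}

static const struct devlink_ops nfp_devlink_ops = {
        .sb_pool_set    = nfp_devlink_sb_pool_set,
};

From the host this is then driven with the stock devlink CLI, e.g.
"devlink sb pool set <dev> sb 0 pool 0 size <bytes> thtype static".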

I hope my clarifications help :)
