Message-ID: <baa4bd4b3aa0639d29e5c396bd3da94e01cd8528.camel@redhat.com>
Date: Mon, 18 Dec 2023 21:12:35 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Jiri Pirko <jiri@...nulli.us>, netdev@...r.kernel.org, 
 anthony.l.nguyen@...el.com, intel-wired-lan@...ts.osuosl.org, 
 qi.z.zhang@...el.com, Wenjun Wu <wenjun1.wu@...el.com>,
 maxtram95@...il.com,  "Chittim, Madhu" <madhu.chittim@...el.com>,
 "Samudrala, Sridhar" <sridhar.samudrala@...el.com>, Simon Horman
 <simon.horman@...hat.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v4 0/5] iavf: Add devlink and
 devlink rate support

On Fri, 2023-12-15 at 14:41 -0800, Jakub Kicinski wrote:
> I explained before (perhaps on the netdev call) - Qdiscs have two
> different offload models. "local" and "switchdev", here we want "local"
> AFAIU and TBF only has "switchdev" offload (take a look at the enqueue
> method and which drivers support it today).

I must admit the above is not yet clear to me.

I initially thought you meant that "local" offloads properly
reconfigure the S/W datapath so that locally generated traffic goes
through the expected processing (e.g. shaping) just once, while with
"switchdev" offload locally generated traffic is shaped by both the
S/W and the H/W[1].

Reading the above, I now think you mean that "local" offloads take
effect only on locally generated traffic but not on traffic forwarded
via the eswitch, and vice versa[2].

The drivers I looked at did not give me any clue either way.

FTR, I think that [1] is a bug worth fixing and [2] is evil ;)

Could you please clarify what exactly the difference between them is?

> "We'll extend TBF" is very much adding a new API. You'll have to add
> "local offload" support in TBF and no NIC driver today supports it.
> I'm not saying TBF is bad, but I disagree that it's any different
> than a new NDO for all practical purposes.
> 
> > ndo_setup_tc() feels like the natural choice for H/W offload and TBF
> > is the existing interface IMHO nearest to the requirements here.
> 
> I question whether something as basic as scheduling and ACLs should
> follow the "offload SW constructs" mantra. You are exposed to more
> diverse users so please don't hesitate to disagree, but AFAICT
> the transparent offload (user installs SW constructs and if offload
> is available - offload, otherwise use SW is good enough) has not
> played out like we have hoped.
> 
> Let's figure out what is the abstract model of scheduling / shaping
> within a NIC that we want to target. And then come up with a way of
> representing it in SW. Not which uAPI we can shoehorn into the use
> case.

I thought the model had been quite well defined since the initial
submission from Intel, and it is quite simple: expose TX shaping on a
per-TX-queue basis, with a min rate, a max rate (in bps) and a burst
(in bytes).

I think that by making it more complex (e.g. with nesting, per-packet
overhead, etc.) we will still not cover every possible use case, and
we will add considerable complexity.
Cheers,

Paolo

