Date: Sat, 18 Nov 2023 08:48:43 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: "Zhang, Xuejun" <xuejun.zhang@...el.com>
Cc: Jiri Pirko <jiri@...nulli.us>, <netdev@...r.kernel.org>,
 <anthony.l.nguyen@...el.com>, <intel-wired-lan@...ts.osuosl.org>,
 <qi.z.zhang@...el.com>, Wenjun Wu <wenjun1.wu@...el.com>,
 <maxtram95@...il.com>, "Chittim, Madhu" <madhu.chittim@...el.com>,
 "Samudrala, Sridhar" <sridhar.samudrala@...el.com>, <pabeni@...hat.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v4 0/5] iavf: Add devlink and
 devlink rate support

On Thu, 16 Nov 2023 21:52:49 -0800 Zhang, Xuejun wrote:
> Thanks for looking into our last patch with the devlink API. We really
> appreciate your candid review.
> 
> Following your suggestion, we have looked into 3 tc offload options to
> support queue rate limiting:
> 
> #1 mq + matchall + police
> 
> #2 mq + tbf

You could extend mqprio, too, if you wanted.

> #3 htb
> 
> All 3 tc offload options require some level of tc extensions to support
> VF tx queue rate limiting (tx_maxrate & tx_minrate).
> 
> htb offload requires minimal or no tc changes, with a similar change
> done in the driver (we can share a patch for review).
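
For illustration, a rough sketch of how the three options above could reach a
driver through ndo_setup_tc(); the iavf_setup_* handlers named here are
hypothetical placeholders, not code from this series:

/* Sketch only -- the iavf_setup_* handlers are hypothetical placeholders. */
#include <linux/netdevice.h>
#include <net/pkt_cls.h>

static int iavf_setup_tc(struct net_device *netdev, enum tc_setup_type type,
			 void *type_data)
{
	switch (type) {
	case TC_SETUP_QDISC_MQPRIO:
		/* #1/#2: set up per-TC queue groups (struct
		 * tc_mqprio_qopt_offload); the rate limits then come from
		 * the attached filters or child qdiscs.
		 */
		return iavf_setup_mqprio(netdev, type_data);
	case TC_SETUP_BLOCK:
		/* #1: register a flow block callback so matchall + police
		 * filters (and their rates) reach the driver.
		 */
		return iavf_setup_block(netdev, type_data);
	case TC_SETUP_QDISC_TBF:
		/* #2: struct tc_tbf_qopt_offload carries the token bucket
		 * rate to program on the queue.
		 */
		return iavf_setup_tbf(netdev, type_data);
	case TC_SETUP_QDISC_HTB:
		/* #3: struct tc_htb_qopt_offload maps HTB leaves to queues
		 * (TC_HTB_CREATE, TC_HTB_LEAF_ALLOC_QUEUE, ...).
		 */
		return iavf_setup_htb(netdev, type_data);
	default:
		return -EOPNOTSUPP;
	}
}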
> 
> After discussing with Maxim Mikityanskiy
> (https://lore.kernel.org/netdev/54a7dd27-a612-46f1-80dd-b43e28f8e4ce@intel.com/),
> it looks like a sysfs interface with a tx_minrate extension could be
> the option we take.
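
The sysfs interface referred to here is the per-queue tx_maxrate attribute,
backed by ndo_set_tx_maxrate; below is a minimal sketch of that hook, with
the tx_minrate variant shown purely as an assumed extension (no such ndo
exists upstream today):

#include <linux/netdevice.h>

/* Existing knob: writes to /sys/class/net/<dev>/queues/tx-<n>/tx_maxrate
 * end up in the driver's ndo_set_tx_maxrate (rate in Mbps).
 */
static int iavf_set_tx_maxrate(struct net_device *netdev, int queue_index,
			       u32 maxrate)
{
	/* program the per-queue max rate in HW here */
	return 0;
}

/* Hypothetical tx_minrate counterpart -- does NOT exist upstream; shown
 * only to illustrate the extension being discussed.
 */
static int iavf_set_tx_minrate(struct net_device *netdev, int queue_index,
			       u32 minrate)
{
	/* program the per-queue guaranteed rate in HW here */
	return 0;
}

static const struct net_device_ops iavf_netdev_ops = {
	.ndo_set_tx_maxrate	= iavf_set_tx_maxrate,
	/* .ndo_set_tx_minrate	= iavf_set_tx_minrate,	(hypothetical) */
};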
> 
> Looking forward to your opinion & guidance. Thanks for your time!

My least favorite thing is configuring the same piece of silicon with
4 different SW interfaces. It's okay if we have 4 different uAPIs
(user-level APIs), but the driver should not be exposed to all these
options.

I'm saying 4, but really I can think of 6 ways of setting maxrate :(

IMHO we need to be a bit more realistic about the notion of "offloading
the SW thing" for qdiscs specifically. Normally we offload SW constructs
to have a fallback and a clear definition of functionality.
I bet most data centers use BPF+FQ these days, so the "fallback"
argument does not apply. And the "clear definition", when it comes to
basic rate limiting, is moot.

Besides, we already have mqprio, sysfs maxrate, the SR-IOV ndo, and
devlink rate, none of which has a SW fallback.
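
To make the overlap concrete, these are the driver-facing signatures the
existing interfaces already use (as found in current kernels; only one of
the several devlink rate ops is shown):

#include <linux/netdevice.h>
#include <net/devlink.h>

/* 1. mqprio offload: TC_SETUP_QDISC_MQPRIO, where struct
 *    tc_mqprio_qopt_offload carries per-TC min_rate[]/max_rate[] when
 *    the shaper is TC_MQPRIO_SHAPER_BW_RATE.
 */

/* 2. sysfs maxrate (/sys/class/net/<dev>/queues/tx-<n>/tx_maxrate),
 *    from struct net_device_ops:
 */
int (*ndo_set_tx_maxrate)(struct net_device *dev, int queue_index,
			  u32 maxrate);

/* 3. SR-IOV ndo, also in struct net_device_ops: */
int (*ndo_set_vf_rate)(struct net_device *dev, int vf,
		       int min_tx_rate, int max_tx_rate);

/* 4. devlink rate, in struct devlink_ops: */
int (*rate_leaf_tx_max_set)(struct devlink_rate *devlink_rate, void *priv,
			    u64 tx_max, struct netlink_ext_ack *extack);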

So since you asked for my opinion - my opinion is that step 1 is to
create a common representation of what we already have and feed it to
the drivers via a single interface. It could be as simple as taking
sysfs maxrate and feeding it to the driver via the devlink rate
interface. If we have the right internals, I give zero cares about
which uAPI you pick.
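
Purely as a strawman of that "common representation" idea (the struct and
op below are invented for illustration; nothing like them exists today):

#include <linux/netdevice.h>

/* Strawman: every uAPI (mqprio, sysfs maxrate, SR-IOV ndo, devlink rate)
 * would be translated by the core into this one form before it reaches
 * the driver.
 */
struct queue_shaper_cfg {
	u32 queue;	/* queue index (or VF / group handle) */
	u64 tx_max;	/* max rate, 0 = unlimited */
	u64 tx_min;	/* guaranteed rate, 0 = none */
};

struct queue_shaper_ops {
	int (*set)(struct net_device *dev,
		   const struct queue_shaper_cfg *cfg,
		   struct netlink_ext_ack *extack);
};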
