Message-ID: <20241210202111.7d3a2dc8@kernel.org>
Date: Tue, 10 Dec 2024 20:21:11 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Joe Damato <jdamato@...tly.com>
Cc: Ahmed Zaki <ahmed.zaki@...el.com>, netdev@...r.kernel.org,
 intel-wired-lan@...ts.osuosl.org, andrew+netdev@...n.ch,
 edumazet@...gle.com, pabeni@...hat.com, davem@...emloft.net,
 michael.chan@...adcom.com, tariqt@...dia.com, anthony.l.nguyen@...el.com,
 przemyslaw.kitszel@...el.com
Subject: Re: [PATCH v1 net-next 2/6] net: napi: add CPU affinity to
 napi->config

On Mon, 9 Dec 2024 17:29:00 -0800 Joe Damato wrote:
> My understanding when I attempted this was that using generic IRQ
> notifiers breaks ARFS [1], because each IRQ supports only a single
> affinity notifier, so drivers with ARFS can't _also_ register their
> own notifier for it.

Ah, you are so right, I forgot the details and was grepping for
notifier registration :S
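
Roughly, the conflict is the following; driver_setup_arfs() and
core_setup_affinity() are made-up names for illustration, only
irq_cpu_rmap_add() and irq_set_affinity_notifier() are the real APIs:

#include <linux/interrupt.h>
#include <linux/cpu_rmap.h>

/*
 * ARFS path: irq_cpu_rmap_add() (lib/cpu_rmap.c) installs the IRQ's
 * affinity notifier on behalf of the driver.
 */
static int driver_setup_arfs(struct cpu_rmap *rmap, int irq)
{
	/* Internally calls irq_set_affinity_notifier(irq, &glue->notify). */
	return irq_cpu_rmap_add(rmap, irq);
}

/* Generic path: the core registering its own notifier on the same IRQ. */
static int core_setup_affinity(int irq, struct irq_affinity_notify *notify)
{
	/*
	 * There is one notifier slot per IRQ, so this replaces the rmap
	 * notifier installed above and ARFS stops getting updates.
	 */
	return irq_set_affinity_notifier(irq, notify);
}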

> Two ideas were proposed in the thread I mentioned:
>   1. Have multiple notifiers per IRQ so that a generic, core-based
>      notifier wouldn't break ARFS.
>   2. Jakub mentioned calling cpu_rmap_update from the core so that a
>      generic solution wouldn't be blocked.
> 
> I don't know anything about option 1, so I looked at option 2.
> 
> When I read the code, it seemed that cpu_rmap_update required some
> state to be passed in (struct irq_glue), so the only way to call
> cpu_rmap_update from the core would be to maintain some ARFS state
> in the core as well, so that drivers which support ARFS aren't
> broken by this change.
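
For reference, the state in question looks roughly like this
(paraphrased from lib/cpu_rmap.c, details may differ):

#include <linux/interrupt.h>
#include <linux/cpu_rmap.h>

struct irq_glue {
	struct irq_affinity_notify notify;	/* the IRQ's single notifier */
	struct cpu_rmap *rmap;			/* ARFS reverse map */
	u16 index;				/* slot in the rmap */
};

static void irq_cpu_rmap_notify(struct irq_affinity_notify *notify,
				const cpumask_t *mask)
{
	struct irq_glue *glue = container_of(notify, struct irq_glue, notify);

	/* The core would have to track rmap + index to make this call. */
	cpu_rmap_update(glue->rmap, glue->index, mask);
}
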
> 
> At that time there was no persistent per-NAPI config, but since
> there is now, there might be a way to solve this.
> 
> Just guessing here, but maybe one way to solve this would be to move
> ARFS into the core (rough sketch after this list) by:
>   - Adding a new bit in addition to NAPIF_F_IRQ_AFFINITY, maybe
>     NAPIF_F_ARFS_AFFINITY or something, so that drivers could
>     express that they support ARFS.
>   - Removing the driver calls to irq_cpu_rmap_add and making sure to
>     pass the new bit in for drivers that support ARFS (in your
>     changeset, I believe that would be at least ice, mlx4, and
>     bnxt... possibly more?).
>   - In the generic core code, if the ARFS bit is set, passing in the
>     state needed for ARFS to work; otherwise doing what the proposed
>     code does now.
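
A very rough sketch of that branch in the core; NAPIF_F_ARFS_AFFINITY,
napi_setup_irq_affinity() and napi_irq_rmap_add() are hypothetical
names, only irq_set_affinity_notifier() is an existing API:

#include <linux/netdevice.h>
#include <linux/interrupt.h>

/* Hypothetical new bit alongside NAPIF_F_IRQ_AFFINITY from the patch. */
#define NAPIF_F_ARFS_AFFINITY	BIT(1)

/* Hypothetical core helper that would own the cpu_rmap and notifier. */
int napi_irq_rmap_add(struct napi_struct *napi, int irq);

static int napi_setup_irq_affinity(struct napi_struct *napi, int irq,
				   struct irq_affinity_notify *notify,
				   unsigned long flags)
{
	/*
	 * ARFS-capable driver: the core keeps the rmap state that
	 * irq_cpu_rmap_add() used to set up in the driver.
	 */
	if (flags & NAPIF_F_ARFS_AFFINITY)
		return napi_irq_rmap_add(napi, irq);

	/* No ARFS: plain affinity-mask management, as in the proposed code. */
	return irq_set_affinity_notifier(irq, notify);
}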

SG, maybe I'd put the flag(s) in struct net_device, so that we are sure
the whole device either wants the new behavior or doesn't.

I say flag(s) because idpf probably just wants to opt in to affinity
mask management, without ARFS. So pretty close to Ahmed's existing code.
ARFS support will require another flag to ask for rmap management in
the core, as you describe.
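
Purely as an illustration (field names made up, not taken from any
actual patch), the per-device opt-ins could look like this fragment of
struct net_device:

struct net_device {
	/* ... existing fields ... */

	/* Driver asks the core to manage its NAPI IRQ affinity masks
	 * (idpf-style opt-in, close to Ahmed's existing code). */
	bool	irq_affinity_auto;

	/* Driver also asks the core to own the ARFS cpu_rmap, so it no
	 * longer calls irq_cpu_rmap_add() itself. */
	bool	rx_cpu_rmap_auto;
};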
