[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20240814080915.005cb9ac@kernel.org>
Date: Wed, 14 Aug 2024 08:09:15 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Joe Damato <jdamato@...tly.com>
Cc: netdev@...r.kernel.org, Daniel Borkmann <daniel@...earbox.net>, "David
S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Harshitha Ramamurthy <hramamurthy@...gle.com>, "moderated list:INTEL
ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>, Jeroen de Borst
<jeroendb@...gle.com>, Jiri Pirko <jiri@...nulli.us>, Leon Romanovsky
<leon@...nel.org>, open list <linux-kernel@...r.kernel.org>, "open
list:MELLANOX MLX4 core VPI driver" <linux-rdma@...r.kernel.org>, Lorenzo
Bianconi <lorenzo@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Praveen
Kaligineedi <pkaligineedi@...gle.com>, Przemek Kitszel
<przemyslaw.kitszel@...el.com>, Saeed Mahameed <saeedm@...dia.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>, Shailend Chand
<shailend@...gle.com>, Tariq Toukan <tariqt@...dia.com>, Tony Nguyen
<anthony.l.nguyen@...el.com>, Willem de Bruijn <willemb@...gle.com>, Yishai
Hadas <yishaih@...dia.com>, Ziwei Xiao <ziweixiao@...gle.com>
Subject: Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several
drivers
On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
> Actually... how about a slightly different approach, which caches
> the affinity mask in the core?
I was gonna say :)
> 0. Extend napi struct to have a struct cpumask * field
>
> 1. extend netif_napi_set_irq to:
> a. store the IRQ number in the napi struct (as you suggested)
> b. call irq_get_effective_affinity_mask to store the mask in the
> napi struct
> c. set up generic affinity_notify.notify and
> affinity_notify.release callbacks to update the in core mask
> when it changes
This part I'm not an export on.
> 2. add napi_affinity_no_change which now takes a napi_struct
>
> 3. cleanup all 5 drivers:
> a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
> is needed, so I think this would be straight forward?)
> b. remove all affinity_mask caching code in 4 of 5 drivers
> c. update all 5 drivers to call napi_affinity_no_change in poll
>
> Then ... anyone who adds support for netif_napi_set_irq to their
> driver in the future gets automatic support in-core for
> caching/updating of the mask? And in the future netdev-genl could
> dump the mask since its in-core?
>
> I'll mess around with that locally to see how it looks, but let me
> know if that sounds like a better overall approach.
Could we even handle this directly as part of __napi_poll(),
once the driver gives core all of the relevant pieces of information ?
Powered by blists - more mailing lists