Message-ID: <ZryfGDU9wHE0IrvZ@LQ3V64L9R2.home>
Date: Wed, 14 Aug 2024 13:12:08 +0100
From: Joe Damato <jdamato@...tly.com>
To: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
Daniel Borkmann <daniel@...earbox.net>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Harshitha Ramamurthy <hramamurthy@...gle.com>,
"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
Jeroen de Borst <jeroendb@...gle.com>,
Jiri Pirko <jiri@...nulli.us>, Leon Romanovsky <leon@...nel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:MELLANOX MLX4 core VPI driver" <linux-rdma@...r.kernel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Praveen Kaligineedi <pkaligineedi@...gle.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Saeed Mahameed <saeedm@...dia.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Shailend Chand <shailend@...gle.com>,
Tariq Toukan <tariqt@...dia.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Willem de Bruijn <willemb@...gle.com>,
Yishai Hadas <yishaih@...dia.com>,
Ziwei Xiao <ziweixiao@...gle.com>
Subject: Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers
On Wed, Aug 14, 2024 at 08:14:48AM +0100, Joe Damato wrote:
> On Tue, Aug 13, 2024 at 05:17:10PM -0700, Jakub Kicinski wrote:
> > On Mon, 12 Aug 2024 14:56:21 +0000 Joe Damato wrote:
> > > Several drivers make a check in their napi poll functions to determine
> > > if the CPU affinity of the IRQ has changed. If it has, the napi poll
> > > function returns a value less than the budget to force polling mode to
> > > be disabled, so that it can be rescheduled on the correct CPU next time
> > > the softirq is raised.
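
[ For readers skimming the thread: the pattern being described looks
  roughly like this in the drivers' poll functions. This is a minimal
  sketch with invented names (foo_*, v->affinity_mask), not any one
  driver's actual code:

	static int foo_napi_poll(struct napi_struct *napi, int budget)
	{
		struct foo_vector *v =
			container_of(napi, struct foo_vector, napi);
		int work_done = foo_clean_rx_irq(v, budget);
		bool affinity_changed =
			!cpumask_test_cpu(smp_processor_id(),
					  &v->affinity_mask);

		/* Out of work, or the IRQ has moved to another CPU:
		 * return less than budget to leave polling mode. In the
		 * affinity-changed case, the next interrupt then
		 * reschedules NAPI on the correct CPU.
		 */
		if (work_done < budget || affinity_changed) {
			work_done = min(work_done, budget - 1);
			napi_complete_done(napi, work_done);
			return work_done;
		}

		return budget;
	}
]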
> >
> > Any reason not to use the irq number already stored in napi_struct ?
>
> Thanks for taking a look.
>
> IIUC, that's possible if i40e, iavf, and gve are updated to call
> netif_napi_set_irq first, which I could certainly do.
>
> But as Stanislav points out, that would add a call to
> irq_get_effective_affinity_mask in the hot path for 4 of the 5
> drivers, where one did not exist before.
>
> In that case, it might make more sense to introduce:
>
> bool napi_affinity_no_change(const struct cpumask *aff_mask)
>
> instead, so the drivers which already have a cached mask can pass
> it in, and gve can be updated later to cache one.
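
[ i.e., roughly the following, assuming drivers without a cached mask
  would pass NULL:

	bool napi_affinity_no_change(const struct cpumask *aff_mask)
	{
		if (!aff_mask)
			return true;

		return cpumask_test_cpu(smp_processor_id(), aff_mask);
	}
]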
>
> I'm not sure how crucial avoiding the
> irq_get_effective_affinity_mask call is, but I'd guess some driver
> owners would object to adding a new call in the hot path where one
> didn't exist before.
>
> What do you think?
Actually... how about a slightly different approach, which caches
the affinity mask in the core?
0. Extend the napi struct to have a struct cpumask * field
1. Extend netif_napi_set_irq to:
   a. store the IRQ number in the napi struct (as you suggested)
   b. call irq_get_effective_affinity_mask to store the mask in the
      napi struct
   c. set up generic affinity_notify.notify and
      affinity_notify.release callbacks to update the in-core mask
      when it changes
2. Add napi_affinity_no_change, which now takes a napi_struct (rough
   sketch below)
3. Clean up all 5 drivers:
   a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
      is needed, so this should be straightforward?)
   b. remove the affinity_mask caching code in 4 of the 5 drivers
   c. update all 5 drivers to call napi_affinity_no_change in poll
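
To make that concrete, here is roughly what I have in mind for the
core pieces. This is an untested, hand-wavy sketch: the mask is
embedded rather than a pointer for brevity, the exact signature of
irq_get_effective_affinity_mask is glossed over, and the notifier
locking/lifetime details are ignored:

	/* 0: new napi_struct fields */
	struct cpumask affinity_mask;
	struct irq_affinity_notify affinity_notify;

	/* 1c: keep the in-core mask fresh when the affinity changes */
	static void napi_affinity_notify(struct irq_affinity_notify *notify,
					 const cpumask_t *mask)
	{
		struct napi_struct *napi =
			container_of(notify, struct napi_struct,
				     affinity_notify);

		cpumask_copy(&napi->affinity_mask, mask);
	}

	static void napi_affinity_release(struct kref *ref)
	{
		/* nothing to free in this sketch */
	}

	/* 1a/1b/1c: extend netif_napi_set_irq */
	void netif_napi_set_irq(struct napi_struct *napi, int irq)
	{
		napi->irq = irq;
		if (irq <= 0)
			return;

		/* prime the cache with the current effective affinity
		 * (signature hand-waved)
		 */
		irq_get_effective_affinity_mask(irq, &napi->affinity_mask);

		napi->affinity_notify.notify = napi_affinity_notify;
		napi->affinity_notify.release = napi_affinity_release;
		irq_set_affinity_notifier(irq, &napi->affinity_notify);
	}

	/* 2: the helper drivers call from their poll functions */
	bool napi_affinity_no_change(const struct napi_struct *napi)
	{
		return cpumask_test_cpu(smp_processor_id(),
					&napi->affinity_mask);
	}

Driver side (3c), the check in poll would then reduce to something
like:

	if (work_done == budget) {
		/* keep polling only if the IRQ still fires on this CPU */
		if (napi_affinity_no_change(napi))
			return budget;

		/* affinity changed: return < budget so NAPI gets
		 * rescheduled on the correct CPU
		 */
		napi_complete_done(napi, budget - 1);
		return budget - 1;
	}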
Then... anyone who adds support for netif_napi_set_irq to their
driver in the future gets automatic in-core caching/updating of the
mask? And in the future netdev-genl could dump the mask, since it's
in-core?
I'll mess around with that locally to see how it looks, but let me
know if that sounds like a better overall approach.
- Joe