Message-ID: <ZrzxBAWwA7EuRB24@LQ3V64L9R2>
Date: Wed, 14 Aug 2024 19:01:40 +0100
From: Joe Damato <jdamato@...tly.com>
To: Shay Drori <shayd@...dia.com>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
Daniel Borkmann <daniel@...earbox.net>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Harshitha Ramamurthy <hramamurthy@...gle.com>,
"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
Jeroen de Borst <jeroendb@...gle.com>,
Jiri Pirko <jiri@...nulli.us>, Leon Romanovsky <leon@...nel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:MELLANOX MLX4 core VPI driver" <linux-rdma@...r.kernel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Praveen Kaligineedi <pkaligineedi@...gle.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Saeed Mahameed <saeedm@...dia.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Shailend Chand <shailend@...gle.com>,
Tariq Toukan <tariqt@...dia.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Willem de Bruijn <willemb@...gle.com>,
Yishai Hadas <yishaih@...dia.com>,
Ziwei Xiao <ziweixiao@...gle.com>
Subject: Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers

On Wed, Aug 14, 2024 at 07:03:35PM +0300, Shay Drori wrote:
>
>
> On 14/08/2024 18:19, Joe Damato wrote:
> > On Wed, Aug 14, 2024 at 08:09:15AM -0700, Jakub Kicinski wrote:
> > > On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
> > > > Actually... how about a slightly different approach, which caches
> > > > the affinity mask in the core?
> > >
> > > I was gonna say :)
> > >
> > > > 0. Extend napi struct to have a struct cpumask * field
> > > >
> > > > 1. extend netif_napi_set_irq to:
> > > > a. store the IRQ number in the napi struct (as you suggested)
> > > > b. call irq_get_effective_affinity_mask to store the mask in the
> > > > napi struct
> > > > c. set up generic affinity_notify.notify and
> > > > affinity_notify.release callbacks to update the in core mask
> > > > when it changes
> > >
> > > This part I'm not an expert on.
>
> Several net drivers (mlx5, mlx4, ice, ena, and more) use a feature
> called ARFS (rmap)[1], and this feature relies on the affinity
> notifier mechanism.
> Also, the affinity notifier infra supports only a single notifier
> per IRQ.
>
> Hence, your suggestion (1.c) would break the ARFS feature.
>
> [1] see irq_cpu_rmap_add()

Thanks for taking a look and for your reply.

I did notice the ARFS use by some drivers and figured that might be
why the notifiers were being used in some cases.
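
To make the conflict concrete, here's a rough sketch of what 1.c
might have looked like in the core. The affinity_notify and
affinity_mask members of napi_struct are hypothetical (they are the
proposed new fields, not existing ones; I've modelled the mask as an
embedded cpumask here for simplicity). Since
irq_set_affinity_notifier() keeps only a single notifier per IRQ,
registering this would displace the cpu_rmap notifier that
irq_cpu_rmap_add() installs for ARFS:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Hypothetical sketch of 1.c: the core keeps napi->affinity_mask in
 * sync via its own IRQ affinity notifier. Both members are proposed
 * additions to struct napi_struct, not existing fields.
 */
static void napi_irq_affinity_notify(struct irq_affinity_notify *notify,
				     const cpumask_t *mask)
{
	struct napi_struct *napi = container_of(notify, struct napi_struct,
						affinity_notify);

	cpumask_copy(&napi->affinity_mask, mask);
}

static void napi_irq_affinity_release(struct kref *ref)
{
	/* napi_struct lifetime is managed elsewhere; nothing to free. */
}

static int napi_set_affinity_notifier(struct napi_struct *napi, int irq)
{
	napi->affinity_notify.notify = napi_irq_affinity_notify;
	napi->affinity_notify.release = napi_irq_affinity_release;

	/* Only one notifier is kept per IRQ: this call would replace
	 * the cpu_rmap notifier registered by irq_cpu_rmap_add(),
	 * breaking ARFS.
	 */
	return irq_set_affinity_notifier(irq, &napi->affinity_notify);
}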

I guess the question comes down to whether adding a call to
irq_get_effective_affinity_mask() in the hot path is a bad idea.

If it is, then the only option is to have the drivers pass in their
IRQ affinity masks, as Stanislav suggested, to avoid adding that
call to the hot path.
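
On the core side that would look something like the following, where
netif_napi_set_affinity_mask() is a made-up name and
napi->affinity_mask a made-up pointer member (per step 0 above), just
to illustrate the shape:

#include <linux/netdevice.h>

/* Hypothetical setter: the driver hands the core a pointer to the
 * affinity mask it already maintains, so napi poll can read it
 * without calling into the IRQ layer. Name and member are
 * illustrative only, not existing API.
 */
static inline void netif_napi_set_affinity_mask(struct napi_struct *napi,
						const struct cpumask *mask)
{
	napi->affinity_mask = mask;
}

A driver would call it once at setup time, next to netif_napi_add().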

If not, then the IRQ from the napi_struct can be used and the
affinity mask can be looked up on every napi poll. i40e/gve/iavf
would need calls to netif_napi_set_irq to set the IRQ mapping, which
seems straightforward.
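
That is, roughly the following, reusing the
irq_get_effective_affinity_mask() helper mentioned above and the IRQ
number stored by netif_napi_set_irq(); only the napi_affinity()
wrapper itself is invented here:

#include <linux/irq.h>
#include <linux/netdevice.h>

/* Per-poll variant: look the mask up from the IRQ layer each time.
 * This is the call whose hot-path cost is in question.
 */
static const struct cpumask *napi_affinity(const struct napi_struct *napi)
{
	if (napi->irq < 0)
		return NULL;

	return irq_get_effective_affinity_mask(napi->irq);
}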

In both cases, the IRQ notifier machinery would be left as-is so
that it wouldn't break ARFS.

I suspect that the preferred solution would be to avoid adding that
call to the hot path, right?