Message-ID: <20241202062639.30ddac57@kernel.org>
Date: Mon, 2 Dec 2024 06:26:39 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Ahmed Zaki <ahmed.zaki@...el.com>
Cc: <intel-wired-lan@...ts.osuosl.org>, <netdev@...r.kernel.org>
Subject: Re: [PATCH iwl-net 1/2] idpf: preserve IRQ affinity settings across
resets
On Mon, 2 Dec 2024 06:03:45 -0700 Ahmed Zaki wrote:
> On 2024-11-11 7:53 p.m., Jakub Kicinski wrote:
> > On Fri, 8 Nov 2024 17:12:05 -0700 Ahmed Zaki wrote:
> >> From: Sudheer Mogilappagari <sudheer.mogilappagari@...el.com>
> >>
> >> Currently, the IRQ affinity settings are lost when the interface
> >> goes through a soft reset (due to MTU configuration, changing the
> >> number of queues, etc.). Use irq_set_affinity_notifier() callbacks
> >> to keep the IRQ affinity info in sync between the driver and the
> >> kernel.
> >
> > Could you try doing this in the core? Store the mask in napi_struct
> > if it has IRQ associated with it?
> >
> > Barely any drivers get this right.
>
> The napi structs are allocated/freed in the open/close NDOs. I don't think
> we should expect the user to re-set the CPU affinity after every link
> down/up.
The napi_config struct is persistent, though; it survives the close/open
cycle, so the affinity mask could be stored there.
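
To make the idea concrete, here is a rough, untested sketch. The
affinity_mask field in napi_config, the affinity_notify member in
napi_struct and both helpers below are made-up names used only for
illustration; only irq_set_affinity_notifier(), irq_set_affinity() and
the cpumask helpers are existing APIs.

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/kref.h>
#include <linux/netdevice.h>

/* Hypothetical additions, for illustration only:
 *  - struct napi_config gains a cpumask that survives ndo_close()/ndo_open()
 *  - struct napi_struct gains an irq_affinity_notify so the core, not each
 *    driver, tracks what the user wrote to /proc/irq/<n>/smp_affinity
 */

static void napi_irq_affinity_notify(struct irq_affinity_notify *notify,
				     const cpumask_t *mask)
{
	/* "affinity_notify" as a napi_struct member is assumed here */
	struct napi_struct *napi =
		container_of(notify, struct napi_struct, affinity_notify);

	/* remember the user's choice in the persistent per-NAPI config */
	cpumask_copy(&napi->config->affinity_mask, mask);
}

static void napi_irq_affinity_release(struct kref *ref)
{
	/* nothing to free: the notifier would be embedded in napi_struct */
}

/* Called when the driver (re)attaches an IRQ to the NAPI instance,
 * e.g. after a reset: re-apply whatever the user had configured and
 * re-register the notifier so future changes keep being recorded.
 */
static void napi_restore_irq_affinity(struct napi_struct *napi,
				      unsigned int irq)
{
	if (!cpumask_empty(&napi->config->affinity_mask))
		irq_set_affinity(irq, &napi->config->affinity_mask);

	napi->affinity_notify.notify = napi_irq_affinity_notify;
	napi->affinity_notify.release = napi_irq_affinity_release;
	irq_set_affinity_notifier(irq, &napi->affinity_notify);
}

Drivers would then only call the restore helper wherever they request
the vector, instead of each one re-implementing the notifier
bookkeeping on its own.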