Message-ID: <20241220113711.5b09140b@kernel.org>
Date: Fri, 20 Dec 2024 11:37:11 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Ahmed Zaki <ahmed.zaki@...el.com>
Cc: <netdev@...r.kernel.org>, <intel-wired-lan@...ts.osuosl.org>,
<andrew+netdev@...n.ch>, <edumazet@...gle.com>, <pabeni@...hat.com>,
<davem@...emloft.net>, <michael.chan@...adcom.com>, <tariqt@...dia.com>,
<anthony.l.nguyen@...el.com>, <przemyslaw.kitszel@...el.com>,
<jdamato@...tly.com>, <shayd@...dia.com>, <akpm@...ux-foundation.org>
Subject: Re: [PATCH net-next v2 4/8] net: napi: add CPU affinity to
napi->config
On Fri, 20 Dec 2024 12:15:33 -0700 Ahmed Zaki wrote:
> > I don't understand what you're trying to say, could you rephrase?
>
> Sure. After this patch, we have (simplified):
>
> void netif_napi_set_irq(struct napi_struct *napi, int irq,
>                         unsigned long flags)
> {
>         struct irq_glue *glue = NULL;
>         int rc;
>
>         napi->irq = irq;
>
> #ifdef CONFIG_RFS_ACCEL
>         if (napi->dev->rx_cpu_rmap && flags & NAPIF_IRQ_ARFS_RMAP) {
>                 rc = irq_cpu_rmap_add(napi->dev->rx_cpu_rmap, irq, napi,
>                                       netif_irq_cpu_rmap_notify);
>                 ...
>         }
> #endif
>
>         if (flags & NAPIF_IRQ_AFFINITY) {
>                 glue = kzalloc(sizeof(*glue), GFP_KERNEL);
>                 if (!glue)
>                         return;
>                 glue->notify.notify = netif_irq_cpu_rmap_notify;
>                 glue->notify.release = netif_napi_affinity_release;
>                 ...
>         }
> }
>
>
> Both branches install the same callback, "netif_irq_cpu_rmap_notify()",
> as the IRQ affinity notifier, but the first branch calls
> irq_cpu_rmap_add(), where the notifier is embedded in "struct irq_glue".
> The callback therefore has to assume the notifier lives inside an
> irq_glue, which means the second "if" branch has to wrap it the same way.
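
IIUC that means the affinity-only branch also has to embed the notifier
in an irq_glue before registering it, so that the shared callback can
container_of() its way back to the glue. Rough sketch based on your
snippet (error handling and freeing omitted; the
irq_set_affinity_notifier() call is my guess at what the elided part does):

        glue = kzalloc(sizeof(*glue), GFP_KERNEL);
        if (!glue)
                return;
        glue->notify.notify = netif_irq_cpu_rmap_notify;
        glue->notify.release = netif_napi_affinity_release;
        /* the callback does container_of(notify, struct irq_glue, notify),
         * so the notifier we register must be the one embedded in the glue
         */
        rc = irq_set_affinity_notifier(irq, &glue->notify);
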
First off, I'm still a bit confused why you think the flags should be
per NAPI call and not set once, at init time.
Perhaps rename the netif_enable_cpu_rmap() suggested earlier to something
more generic (netif_enable_irq_tracking()?) and pass the flags there?
Or is there a driver which wants to vary the flags per NAPI instance?
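
From the driver side I'm picturing something like this (function name is
the one suggested above, the signature is made up; flag names are taken
from your snippet):

        /* once, when the device's queues/IRQs are set up -- not per NAPI */
        err = netif_enable_irq_tracking(dev, num_rx_queues,
                                        NAPIF_IRQ_AFFINITY | NAPIF_IRQ_ARFS_RMAP);
        if (err)
                return err;
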
Then you can probably register a single unified handler, and inside
that handler check if the device wanted to have rmap or just affinity?
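
Totally untested sketch of what I mean by a unified handler -- the
glue->napi / glue->index fields and the config->affinity_mask name are
my assumptions:

static void netif_napi_irq_notify(struct irq_affinity_notify *notify,
                                  const cpumask_t *mask)
{
        struct irq_glue *glue = container_of(notify, struct irq_glue, notify);
        struct napi_struct *napi = glue->napi;

#ifdef CONFIG_RFS_ACCEL
        /* the rmap only exists if the driver asked for ARFS at init time */
        if (napi->dev->rx_cpu_rmap)
                cpu_rmap_update(napi->dev->rx_cpu_rmap, glue->index, mask);
#endif
        /* plain affinity tracking, persisted in the per-NAPI config */
        cpumask_copy(&napi->config->affinity_mask, mask);
}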