Message-ID: <6431a069-6fc5-47ad-9519-868ae84b4a1a@intel.com>
Date: Wed, 13 Dec 2023 12:23:43 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Paul Menzel <pmenzel@...gen.mpg.de>, Maciej Fijalkowski
<maciej.fijalkowski@...el.com>, Jesper Dangaard Brouer <hawk@...nel.org>,
Amritha Nambiar <amritha.nambiar@...el.com>, Larysa Zaremba
<larysa.zaremba@...el.com>, <netdev@...r.kernel.org>, Alexander Duyck
<alexanderduyck@...com>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
"Eric Dumazet" <edumazet@...gle.com>, <linux-kernel@...r.kernel.org>,
Yunsheng Lin <linyunsheng@...wei.com>, Michal Kubiak
<michal.kubiak@...el.com>, <intel-wired-lan@...ts.osuosl.org>, David
Christensen <drc@...ux.vnet.ibm.com>, Paolo Abeni <pabeni@...hat.com>, "David
S. Miller" <davem@...emloft.net>
Subject: Re: [Intel-wired-lan] [PATCH net-next v6 08/12] libie: add Rx buffer
management (via Page Pool)
From: Jakub Kicinski <kuba@...nel.org>
Date: Mon, 11 Dec 2023 11:23:32 -0800
> On Mon, 11 Dec 2023 11:16:20 +0100 Alexander Lobakin wrote:
>> Ideally, I'd like to pass a CPU ID this queue will be run on and use
>> cpu_to_node(), but currently there's no NUMA-aware allocations in the
>> Intel drivers and Rx queues don't get the corresponding CPU ID when
>> configuring. I may revisit this later, but for now NUMA_NO_NODE is the
>> best option here.
>
> Hm, I've been wondering about persistent IRQ mappings. Drivers
> resetting IRQ mapping on reconfiguration is a major PITA in production
> clusters. You change the RSS hash and some NICs suddenly forget
> affinitization 🤯️
>
> The connection with memory allocations changes the math on that a bit.
>
> The question is really whether we add CPU <> NAPI config as a netdev
> Netlink API or build around the generic IRQ affinity API. The latter
> is definitely better from "don't duplicate uAPI" perspective.
> But we need to reset the queues and reallocate their state when
> the mapping is changed. And shutting down queues on
>
> echo $cpu > /../smp_affinity_list
>
> seems moderately insane. Perhaps some middle-ground exists.
>
> Anyway, if you do find cycles to tackle this - pls try to do it
> generically not just for Intel? :)

Sounds good, adding to my fathomless backlog :>

Thanks,
Olek