Message-ID: <701eb84c-8d26-4945-8af3-55a70e05b09c@nvidia.com>
Date: Wed, 14 Aug 2024 19:03:35 +0300
From: Shay Drori <shayd@...dia.com>
To: Joe Damato <jdamato@...tly.com>, Jakub Kicinski <kuba@...nel.org>,
<netdev@...r.kernel.org>, Daniel Borkmann <daniel@...earbox.net>, "David S.
Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, "Harshitha
Ramamurthy" <hramamurthy@...gle.com>, "moderated list:INTEL ETHERNET DRIVERS"
<intel-wired-lan@...ts.osuosl.org>, Jeroen de Borst <jeroendb@...gle.com>,
Jiri Pirko <jiri@...nulli.us>, Leon Romanovsky <leon@...nel.org>, open list
<linux-kernel@...r.kernel.org>, "open list:MELLANOX MLX4 core VPI driver"
<linux-rdma@...r.kernel.org>, Lorenzo Bianconi <lorenzo@...nel.org>, "Paolo
Abeni" <pabeni@...hat.com>, Praveen Kaligineedi <pkaligineedi@...gle.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>, Saeed Mahameed
<saeedm@...dia.com>, Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Shailend Chand <shailend@...gle.com>, Tariq Toukan <tariqt@...dia.com>, "Tony
Nguyen" <anthony.l.nguyen@...el.com>, Willem de Bruijn <willemb@...gle.com>,
Yishai Hadas <yishaih@...dia.com>, Ziwei Xiao <ziweixiao@...gle.com>
Subject: Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers
On 14/08/2024 18:19, Joe Damato wrote:
> On Wed, Aug 14, 2024 at 08:09:15AM -0700, Jakub Kicinski wrote:
>> On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
>>> Actually... how about a slightly different approach, which caches
>>> the affinity mask in the core?
>>
>> I was gonna say :)
>>
>>> 0. Extend napi struct to have a struct cpumask * field
>>>
>>> 1. extend netif_napi_set_irq to:
>>> a. store the IRQ number in the napi struct (as you suggested)
>>> b. call irq_get_effective_affinity_mask to store the mask in the
>>>    napi struct
>>> c. set up generic affinity_notify.notify and
>>>    affinity_notify.release callbacks to update the in core mask
>>>    when it changes
>>
>> This part I'm not an expert on.
Several net drivers (mlx5, mlx4, ice, ena and more) use a feature
called ARFS (rmap)[1], and that feature relies on the affinity notifier
mechanism.
Also, the affinity notifier infrastructure supports only a single
notifier per IRQ.
Hence, your suggestion (1.c) would break the ARFS feature.
[1] see irq_cpu_rmap_add()
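
To make that concrete, here is a rough sketch of the conflict (not the
actual lib/cpu_rmap.c or kernel/irq code, just its shape; the wrapper
function names below are made up):

#include <linux/interrupt.h>
#include <linux/cpu_rmap.h>

/* ARFS path: irq_cpu_rmap_add() claims the single per-IRQ notifier. */
static int foo_arfs_wire_up(struct cpu_rmap *rmap, int irq)
{
        return irq_cpu_rmap_add(rmap, irq);     /* registers notify/release */
}

/*
 * Hypothetical core path from your step 1.c: registering a second
 * notifier on the same IRQ replaces the rmap one, because the irq
 * descriptor keeps only one affinity_notify pointer, so ARFS silently
 * stops seeing affinity changes.
 */
static int core_track_affinity(int irq, struct irq_affinity_notify *notify)
{
        return irq_set_affinity_notifier(irq, notify);
}
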
>>
>>> 2. add napi_affinity_no_change which now takes a napi_struct
>>>
>>> 3. cleanup all 5 drivers:
>>> a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
>>>    is needed, so I think this would be straightforward?)
>>> b. remove all affinity_mask caching code in 4 of 5 drivers
>>> c. update all 5 drivers to call napi_affinity_no_change in poll
>>>
>>> Then ... anyone who adds support for netif_napi_set_irq to their
>>> driver in the future gets automatic support in-core for
>>> caching/updating of the mask? And in the future netdev-genl could
>>> dump the mask since it's in-core?
>>>
>>> I'll mess around with that locally to see how it looks, but let me
>>> know if that sounds like a better overall approach.
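
FWIW, ARFS aside, I read steps 0-2 as roughly the below (sketch only:
the affinity_mask field and napi_affinity_no_change() are just the
names from your outline, not existing code, and I'm assuming
irq_get_effective_affinity_mask() hands back a cpumask pointer):

/*
 * Step 0: suppose struct napi_struct grows
 *      int irq;
 *      struct cpumask affinity_mask;   (embedded here for simplicity
 *                                       instead of a pointer)
 */

/* Step 1: remember the IRQ and snapshot its effective affinity. */
void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
        napi->irq = irq;
        cpumask_copy(&napi->affinity_mask,
                     irq_get_effective_affinity_mask(irq));
        /*
         * Step 1.c would also call irq_set_affinity_notifier() here to
         * keep the copy current, which is exactly the part that
         * collides with the rmap notifier above.
         */
}

/* Step 2: drivers ask the core from their poll routine. */
bool napi_affinity_no_change(const struct napi_struct *napi)
{
        return cpumask_test_cpu(smp_processor_id(), &napi->affinity_mask);
}
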
>
> I ended up going with the approach laid out above; moving the IRQ
> affinity mask updating code into the core (which adds that ability
> to gve/mlx4/mlx5... it seems mlx4/5 cached but didn't have notifiers
> set up to update the cached copy?)
This is probably due to what I wrote above: ARFS already occupies the
only notifier slot for those IRQs.
> and adding calls to
> netif_napi_set_irq in i40e/iavf and deleting their custom notifier
> code.
>
> It's almost ready for rfcv2; I think this approach is probably
> better?
>
>> Could we even handle this directly as part of __napi_poll(),
>> once the driver gives core all of the relevant pieces of information?
>
> I had been thinking the same thing, too, but it seems like at least
> one driver (mlx5) counts the number of affinity changes to export as
> a stat, so moving all of this to core would break that.
>
> So, I may avoid attempting that for this series.
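
FWIW, the counter itself could stay in the driver even with the check
helper in the core; something like the below in the driver's poll
(the foo_* names and the aff_change field are made up, not mlx5's
actual code):

static int foo_napi_poll(struct napi_struct *napi, int budget)
{
        struct foo_channel *ch = container_of(napi, struct foo_channel, napi);
        int work_done = foo_poll_rings(ch, budget);

        if (work_done == budget) {
                /* Still busy: keep polling unless the IRQ moved away. */
                if (napi_affinity_no_change(napi))
                        return budget;
                ch->stats.aff_change++;         /* driver-owned statistic */
                if (work_done)
                        work_done--;            /* allow napi_complete_done() */
        }

        if (napi_complete_done(napi, work_done))
                foo_arm_irq(ch);
        return work_done;
}
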
>
> I'm still messing around with this but will send an rfcv2 in a bit.
>