Message-ID: <e420a11f-1c07-4a3f-85b4-b7679b4e50ce@huawei.com>
Date: Wed, 9 Oct 2024 11:33:02 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: <davem@...emloft.net>, <pabeni@...hat.com>, <liuyonglong@...wei.com>,
<fanghaiqing@...wei.com>, <zhangkun09@...wei.com>, Alexander Lobakin
<aleksander.lobakin@...el.com>, Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>, Eric Dumazet
<edumazet@...gle.com>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net v2 1/2] page_pool: fix timing for checking and
disabling napi_local
On 2024/10/9 8:40, Jakub Kicinski wrote:
> On Wed, 25 Sep 2024 15:57:06 +0800 Yunsheng Lin wrote:
>> Use rcu mechanism to avoid the above concurrent access problem.
>>
>> Note, the above was found during code reviewing on how to fix
>> the problem in [1].
>
> The driver must make sure NAPI cannot be running while
> page_pool_destroy() is called. There's even a WARN()
> checking this... if you know what to look for.
I am guessing you are referring to the WARN() in
page_pool_disable_direct_recycling(), right?
If yes, I am aware of that WARN().
The problem is that, at least in the skb_defer_free_flush()
case, the freeing is not tied to any specific napi instance.
When skb_attempt_defer_free() calls kick_defer_list_purge() to
trigger a run of net_rx_action(), skb_defer_free_flush() can
be called without being tied to any specific napi instance, as
I understand it:
https://elixir.bootlin.com/linux/v6.7-rc8/source/net/core/dev.c#L6719
Or am I missing something obvious here? I even used the diff below to
verify that, and it did trigger without any napi on the sd->poll_list:
@@ -6313,6 +6313,9 @@ static void skb_defer_free_flush(struct softnet_data *sd)
 	spin_unlock(&sd->defer_lock);
 
 	while (skb != NULL) {
+		if (list_empty(&sd->poll_list))
+			pr_err("defer freeing: %px with empty napi list\n", skb);
+
 		next = skb->next;
 		napi_consume_skb(skb, 1);
 		skb = next;