Message-ID: <d769109c-2479-4050-a24a-e79323620a62@kernel.org>
Date: Mon, 19 Jan 2026 18:26:55 +0100
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Leon Hwang <leon.hwang@...ux.dev>, netdev@...r.kernel.org,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu
<mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"David S . Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, kerneljasonxing@...il.com,
lance.yang@...ux.dev, jiayuan.chen@...ux.dev, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, Leon Huang Fu <leon.huangfu@...pee.com>,
Dragos Tatulea <dtatulea@...dia.com>,
kernel-team <kernel-team@...udflare.com>, Yan Zhai <yan@...udflare.com>
Subject: Re: [PATCH net-next v3] page_pool: Add page_pool_release_stalled
tracepoint
On 04/01/2026 17.43, Jakub Kicinski wrote:
> On Fri, 2 Jan 2026 12:43:46 +0100 Jesper Dangaard Brouer wrote:
>> On 02/01/2026 08.17, Leon Hwang wrote:
>>> Introduce a new tracepoint to track stalled page pool releases,
>>> providing better observability for page pool lifecycle issues.
>>
>> In general I like/support adding this tracepoint for "debugability" of
>> page pool lifecycle issues.
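
[ Side note for readers following along: below is a rough sketch of what
  such a tracepoint could look like, next to the existing events in
  include/trace/events/page_pool.h. The event and field names are my own
  illustration, not necessarily what the patch ends up using. ]

TRACE_EVENT(page_pool_release_stalled,

        TP_PROTO(const struct page_pool *pool, int inflight, u32 sec),

        TP_ARGS(pool, inflight, sec),

        TP_STRUCT__entry(
                __field(const struct page_pool *, pool)
                __field(u32, pool_id)
                __field(int, inflight)
                __field(u32, sec)
        ),

        TP_fast_assign(
                __entry->pool = pool;
                __entry->pool_id = pool->user.id;
                __entry->inflight = inflight;
                __entry->sec = sec;
        ),

        TP_printk("page_pool=%p id=%u inflight=%d stalled=%u sec",
                  __entry->pool, __entry->pool_id,
                  __entry->inflight, __entry->sec)
);

Once the patch lands, enabling it from tracefs
(/sys/kernel/tracing/events/page_pool/) gives per-pool stall events
without having to grep dmesg for the pr_warn().
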
>>
>> For "observability" @Kuba added a netlink scheme[1][2] for page_pool[3],
>> which gives us the ability to get events and list page_pools from userspace.
>> I've not used this myself (yet) so I need input from others if this is
>> something that others have been using for page pool lifecycle issues?
>
> My input here is the least valuable (since one may expect the person
> who added the code uses it) - but FWIW yes, we do use the PP stats to
> monitor PP lifecycle issues at Meta. That said - we only monitor for
> accumulation of leaked memory from orphaned pages, as the whole reason
> for adding this code was that in practice the page may be sitting in
> a socket rx queue (or defer free queue etc.). IOW a PP which is not
> getting destroyed for a long time is not necessarily a kernel issue.
>
>> Need input from @Kuba/others as the "page-pool-get"[4] state that "Only
>> Page Pools associated with a net_device can be listed". Don't we want
>> the ability to list "invisible" page_pool's to allow debugging issues?
>>
>> [1] https://docs.kernel.org/userspace-api/netlink/intro-specs.html
>> [2] https://docs.kernel.org/userspace-api/netlink/index.html
>> [3] https://docs.kernel.org/netlink/specs/netdev.html
>> [4] https://docs.kernel.org/netlink/specs/netdev.html#page-pool-get
>
> The documentation should probably be updated :(
> I think what I meant is that most _drivers_ didn't link their PP to the
> netdev via params when the API was added. So if the user doesn't see the
> page pools - the driver is probably not well maintained.
>
> In practice only page pools which are not accessible / visible via the
> API are page pools from already destroyed network namespaces (assuming
> their netdevs were also destroyed and not re-parented to init_net).
> Which I'd think is a rare case?
>
>> Looking at the code, I see that NETDEV_CMD_PAGE_POOL_CHANGE_NTF netlink
>> notification is only generated once (in page_pool_destroy) and not when
>> we retry in page_pool_release_retry (like this patch). In that sense,
>> this patch/tracepoint is catching something more than netlink provides.
>> First I thought we could add a netlink notification, but I can imagine
>> cases this could generate too many netlink messages e.g. a netdev with
>> 128 RX queues generating these every second for every RX queue.
>
> FWIW yes, we can add more notifications. Tho, as I mentioned at the
> start of my reply - the expectation is that page pools waiting for
> a long time to be destroyed is something that _will_ happen in
> production.
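
To illustrate the rate-limiting concern above: a rough sketch (not a real
patch) of how the retry path could reuse the existing DEFER_WARN_INTERVAL
back-off when sending an event. page_pool_stalled_event() below is a
made-up placeholder, standing in for whatever helper in
net/core/page_pool_user.c would end up sending
NETDEV_CMD_PAGE_POOL_CHANGE_NTF for this pool:

        /* In page_pool_release_retry(), next to the existing pr_warn() */
        if (time_after_eq(jiffies, pool->defer_warn)) {
                /* Reuse the 60 sec DEFER_WARN_INTERVAL back-off, so a
                 * netdev with 128 RX queues does not flood userspace with
                 * an event per page_pool every second.
                 */
                page_pool_stalled_event(pool, inflight); /* placeholder */
                pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
        }
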
>
>> Guess, I've talked myself into liking this change, what do other
>> maintainers think? (e.g. netlink scheme and debugging balance)
>
> We added the Netlink API to mute the pr_warn() in all practical cases.
> If Xiang Mei is seeing the pr_warn() I think we should start by asking
> what kernel and driver they are using, and what the usage pattern is :(
> As I mentioned most commonly the pr_warn() will trigger because driver
> doesn't link the pp to a netdev.

The commit that introduced this, be0096676e23 ("net: page_pool: mute the
periodic warning for visible page pools") (Author: Jakub Kicinski), was
added in kernel v6.8. Our fleet runs 6.12.

Looking at production logs, I'm still seeing these messages, e.g.:

 "page_pool_release_retry() stalled pool shutdown: id 322, 1 inflight
  591248 sec"

Looking at one of these servers, it runs kernel 6.12.59 with the ice NIC
driver.

I'm surprised to see these on our normal servers, and also by the long
stall period. Previously I was seeing these on k8s servers, which makes
more sense, as veth interfaces are likely to be removed and thus more
easily reach the pr_warn() (since Jakub's commit added an extra if
statement checking the netdev pointer: !netdev || netdev ==
NET_PTR_POISON).
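
For reference, that gate looks roughly like this in net/core/page_pool.c
(reconstructed from memory of ~6.12, so treat details as approximate):

        netdev = READ_ONCE(pool->slow.netdev);
        if (time_after_eq(jiffies, pool->defer_warn) &&
            (!netdev || netdev == NET_PTR_POISON)) {
                int sec = (s32)((u32)jiffies - (u32)pool->defer_start) / HZ;

                /* Only pools with no netdev, or an already poisoned one,
                 * get here -- i.e. pools userspace cannot list via the
                 * page-pool-get netlink API.
                 */
                pr_warn("%s() stalled pool shutdown: id %u, %d inflight %d sec\n",
                        __func__, pool->user.id, inflight, sec);
                pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
        }

So when this fires on the ice servers, the pool either never had its
netdev set or that netdev is already gone.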

An example from a k8s server has a smaller stalled period, and I think it
recovered:

 "page_pool_release_retry() stalled pool shutdown: id 18, 1 inflight
  3020 sec"

I'm also surprised to see the ice NIC driver, as previously we mostly saw
these warnings with the bnxt_en driver. I did manage to find some current
bnxt_en cases as well, but that server likely has a hardware defect.

Bottom line: yes, these stalled pool shutdown pr_warn()s are still
happening in production.

--Jesper