Message-ID: <20230417163210.2433ae40@kernel.org>
Date: Mon, 17 Apr 2023 16:32:10 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>,
Eric Dumazet <edumazet@...gle.com>
Cc: netdev@...r.kernel.org, hawk@...nel.org,
ilias.apalodimas@...aro.org, davem@...emloft.net,
pabeni@...hat.com, bpf@...r.kernel.org,
lorenzo.bianconi@...hat.com, nbd@....name
Subject: Re: issue with inflight pages from page_pool
On Mon, 17 Apr 2023 23:31:01 +0200 Lorenzo Bianconi wrote:
> > If it's that then I'm with Eric. There are many ways to keep the pages
> > in use, no point working around one of them and not the rest :(
>
> I was not clear here, my fault. What I meant is that I can see the
> returned-pages counter increasing from time to time, but in most of the
> tests, even 2h after the TCP traffic has stopped,
> page_pool_release_retry() still complains that not all the pages have
> been returned to the pool, so the pool has not been deallocated yet.
> The chunk of code in my first email is just to demonstrate the issue
> and I am completely fine with a better solution :)
Your problem is perhaps made worse by threaded NAPI: you have
defer-freed skbs sprayed across all cores and no NAPI running there
to flush them :(
> I guess we just need a way to free the pool in a reasonable amount
> of time. Agree?
Whether we need to guarantee the release is the real question.
Maybe it's more of a false-positive warning.
Flushing the defer list is probably fine as a hack, but it's not
a full fix, as Eric explained. False positives can still happen.
I'm ambivalent. My only real request would be to make the flushing
a helper in net/core/dev.c rather than open coded in page_pool.c.
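To make that request concrete, a rough sketch of what such a helper could look like, modeled on the existing skb_defer_free_flush() in net/core/dev.c. The function name is hypothetical, and a real version would need to run on (or IPI) each remote CPU rather than touch its lists directly; this is a sketch, not a proposed patch.

```c
/* net/core/dev.c -- hypothetical helper, sketch only.
 * Drain the per-CPU deferred-free skb lists so pages held there can
 * make it back to their page_pool, instead of open-coding the walk
 * in page_pool.c (e.g. from page_pool_release_retry()).
 */
void skb_defer_list_flush_all(void)	/* hypothetical name */
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct softnet_data *sd = &per_cpu(softnet_data, cpu);
		struct sk_buff *skb, *next;

		/* Same splice-and-walk as skb_defer_free_flush(). */
		spin_lock_bh(&sd->defer_lock);
		skb = sd->defer_list;
		sd->defer_list = NULL;
		sd->defer_count = 0;
		spin_unlock_bh(&sd->defer_lock);

		while (skb) {
			next = skb->next;
			napi_consume_skb(skb, 0); /* budget 0: not in NAPI */
			skb = next;
		}
	}
}
```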
Somewhat related - Eric, do we need to handle defer_list in dev_cpu_dead()?