Message-ID: <CAHS8izNSG_fC7t3cAaN0s3W2Mo-7J2UW8c-OxPSpdeuvK-mxxw@mail.gmail.com>
Date: Wed, 12 Feb 2025 11:24:31 -0800
From: Mina Almasry <almasrymina@...gle.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, hawk@...nel.org, ilias.apalodimas@...aro.org,
horms@...nel.org, netdev@...r.kernel.org
Subject: Re: [PATCH net-next v1] page_pool: avoid infinite loop to schedule
delayed worker
On Tue, Feb 11, 2025 at 7:14 PM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> On Wed, Feb 12, 2025 at 10:37 AM Mina Almasry <almasrymina@...gle.com> wrote:
> >
> > On Mon, Feb 10, 2025 at 5:10 AM Jason Xing <kerneljasonxing@...il.com> wrote:
> > >
> > > If a buggy driver drives the inflight count below zero [1] and trips
> >
> > How does a buggy driver trigger this?
>
> We're still working on reproducing and investigating it. With a
> certain driver version plus XDP installed, we see this happen only
> very rarely.
>
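For context, page_pool tracks in-flight pages as the signed distance
between a free-running hold counter and release counter, so a driver
that returns the same page twice can push the count negative. A
minimal userspace sketch of that accounting (simplified, with invented
names, not the kernel code itself):

#include <stdio.h>
#include <stdint.h>

/* Signed distance between two free-running u32 counters, in the
 * spirit of page_pool's inflight calculation (sketch only). */
static int32_t inflight(uint32_t hold_cnt, uint32_t release_cnt)
{
	return (int32_t)(hold_cnt - release_cnt);
}

int main(void)
{
	uint32_t hold_cnt = 100;	/* pages handed out by the pool */
	uint32_t release_cnt = 100;	/* pages returned to the pool */

	release_cnt++;			/* buggy driver: double release */

	/* Prints -1: the "Negative(...) inflight" warning case. */
	printf("inflight = %d\n", inflight(hold_cnt, release_cnt));
	return 0;
}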
> >
> > > the warning in page_pool_inflight(), we should not expect the
> > > whole page_pool to get back to working normally.
> > >
> > > We noticed in production that the kworker is woken up repeatedly
> > > and endlessly [1]. If the page pool detects this error, letting
> > > the pool go is probably better than flooding the kernel log with
> > > warnings. This patch mitigates the adverse effect.
> > >
> > > [1]
> > > [Mon Feb 10 20:36:11 2025] ------------[ cut here ]------------
> > > [Mon Feb 10 20:36:11 2025] Negative(-51446) inflight packet-pages
> > > ...
> > > [Mon Feb 10 20:36:11 2025] Call Trace:
> > > [Mon Feb 10 20:36:11 2025] page_pool_release_retry+0x23/0x70
> > > [Mon Feb 10 20:36:11 2025] process_one_work+0x1b1/0x370
> > > [Mon Feb 10 20:36:11 2025] worker_thread+0x37/0x3a0
> > > [Mon Feb 10 20:36:11 2025] kthread+0x11a/0x140
> > > [Mon Feb 10 20:36:11 2025] ? process_one_work+0x370/0x370
> > > [Mon Feb 10 20:36:11 2025] ? __kthread_cancel_work+0x40/0x40
> > > [Mon Feb 10 20:36:11 2025] ret_from_fork+0x35/0x40
> > > [Mon Feb 10 20:36:11 2025] ---[ end trace ebffe800f33e7e34 ]---
> > >
> > > Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
> > > ---
> > > net/core/page_pool.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > index 1c6fec08bc43..8e9f5801aabb 100644
> > > --- a/net/core/page_pool.c
> > > +++ b/net/core/page_pool.c
> > > @@ -1167,7 +1167,7 @@ void page_pool_destroy(struct page_pool *pool)
> > >         page_pool_disable_direct_recycling(pool);
> > >         page_pool_free_frag(pool);
> > >
> > > -       if (!page_pool_release(pool))
> > > +       if (page_pool_release(pool) <= 0)
> > >                 return;
> >
> > Isn't it the condition in page_pool_release_retry() that you want to
> > modify? That is the one that controls whether the worker keeps
> > spinning, no?
>
> Right, do you mean this patch?
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 8e9f5801aabb..7dde3bd5f275 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -1112,7 +1112,7 @@ static void page_pool_release_retry(struct work_struct *wq)
>         int inflight;
>
>         inflight = page_pool_release(pool);
> -       if (!inflight)
> +       if (inflight <= 0)
>                 return;
>
> It has the same behaviour as the current patch; I just thought we
> could stop it earlier.
>
Yes, I mean this.
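To spell out why the unpatched check spins forever: on a negative
count, page_pool_release_retry() sees a non-zero inflight value and
reschedules its own delayed work, so the kworker wakes up again and
again. A small userspace model of the exit test (a sketch of the
condition only, not the kernel source):

#include <stdio.h>
#include <stdbool.h>

/* If these return true, the delayed work reschedules itself. */
static bool old_check_reschedules(int inflight) { return inflight != 0; }
static bool new_check_reschedules(int inflight) { return inflight > 0; }

int main(void)
{
	int inflight = -51446;	/* the value from the splat above */

	printf("old check reschedules: %d\n", old_check_reschedules(inflight));
	printf("new check reschedules: %d\n", new_check_reschedules(inflight));
	return 0;
}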
> >
> > I also wonder whether the check in page_pool_release() itself
> > needs to be:
> >
> > if (inflight < 0)
> >         __page_pool_destroy();
> >
> > otherwise the pool will never be destroyed, no?
>
> I'm worried this would have a more severe impact, because it's
> uncertain whether the page pool can safely be released in this case :(
>
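The concern being raised is a potential use-after-free: every
in-flight page carries a back-pointer to its owning pool, and
returning the page dereferences that pointer, so force-freeing the
pool while the accounting is untrustworthy is risky. A contrived
userspace illustration (struct names invented, not kernel code):

#include <stdlib.h>

struct page_pool { int pages_recycled; };	/* stand-in pool */
struct fake_page { struct page_pool *pp; };	/* back-pointer to pool */

/* Returning a page touches its owning pool. */
static void put_page_back(struct fake_page *page)
{
	page->pp->pages_recycled++;	/* dangling if pool freed early */
}

int main(void)
{
	struct page_pool *pool = calloc(1, sizeof(*pool));
	struct fake_page page = { .pp = pool };

	free(pool);		/* force-destroy with a page still in flight */
	put_page_back(&page);	/* use-after-free */
	return 0;
}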
Makes sense indeed. We can't be sure the page_pool is safe to destroy
when inflight < 0. Ignore my earlier suggestion, thanks.
--
Thanks,
Mina