Message-ID: <4zkm7dmkxhfhf3cm7eniim26z6nbp3zsm4qttapg3xbvkrqhro@cvjnbr624m5h>
Date: Wed, 13 Aug 2025 20:24:37 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: Chris Arges <carges@...udflare.com>
Cc: Jesse Brandeburg <jbrandeburg@...udflare.com>, netdev@...r.kernel.org,
bpf@...r.kernel.org, kernel-team <kernel-team@...udflare.com>,
Jesper Dangaard Brouer <hawk@...nel.org>, tariqt@...dia.com, saeedm@...dia.com,
Leon Romanovsky <leon@...nel.org>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>, Simon Horman <horms@...nel.org>,
Andrew Rzeznik <arzeznik@...udflare.com>, Yan Zhai <yan@...udflare.com>
Subject: Re: [BUG] mlx5_core memory management issue
On Wed, Aug 13, 2025 at 07:26:49PM +0000, Dragos Tatulea wrote:
> On Wed, Aug 13, 2025 at 01:53:48PM -0500, Chris Arges wrote:
> > On 2025-08-12 16:25:58, Chris Arges wrote:
> > > On 2025-08-12 20:19:30, Dragos Tatulea wrote:
> > > > On Tue, Aug 12, 2025 at 11:55:39AM -0700, Jesse Brandeburg wrote:
> > > > > On 8/12/25 8:44 AM, 'Dragos Tatulea' via kernel-team wrote:
> > > > >
> > > > > > diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> > > > > > index 482d284a1553..484216c7454d 100644
> > > > > > --- a/kernel/bpf/devmap.c
> > > > > > +++ b/kernel/bpf/devmap.c
> > > > > > @@ -408,8 +408,10 @@ static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
> > > > > >  	/* If not all frames have been transmitted, it is our
> > > > > >  	 * responsibility to free them
> > > > > >  	 */
> > > > > > +	xdp_set_return_frame_no_direct();
> > > > > >  	for (i = sent; unlikely(i < to_send); i++)
> > > > > >  		xdp_return_frame_rx_napi(bq->q[i]);
> > > > > > +	xdp_clear_return_frame_no_direct();
> > > > >
> > > > > Why can't this instead just be xdp_return_frame(bq->q[i]); with no
> > > > > "no_direct" fussing?
> > > > >
> > > > > Wouldn't this be the safest way for this function to call frame completion?
> > > > > It seems like presuming the calling context is napi is wrong?
> > > > >
> > > > It would be better indeed. Thanks for removing my horse glasses!
> > > >
> > > > Once Chris verifies that this works for him I can prepare a fix patch.
> > > >
> > > Working on that now; I'm testing a kernel with the following change:
> > >
> > > ---
> > >
> > > diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> > > index 3aa002a47..ef86d9e06 100644
> > > --- a/kernel/bpf/devmap.c
> > > +++ b/kernel/bpf/devmap.c
> > > @@ -409,7 +409,7 @@ static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
> > >  	 * responsibility to free them
> > >  	 */
> > >  	for (i = sent; unlikely(i < to_send); i++)
> > > -		xdp_return_frame_rx_napi(bq->q[i]);
> > > +		xdp_return_frame(bq->q[i]);
> > >
> > >  out:
> > >  	bq->count = 0;
> >
> > This patch resolves the issue I was seeing; I am no longer able to
> > reproduce it. I tested for about 2 hours, whereas the reproducer usually
> > triggers within 1-2 minutes.
> >
> Thanks! I will send a patch tomorrow and add your Tested-by tag.
>
> As a follow-up, it would be good to have a way to catch this family of
> issues, something along the lines of the patch below.
>
> Thanks,
> Dragos
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index f1373756cd0f..0c498fbd8df6 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -794,6 +794,10 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
>  {
>  	lockdep_assert_no_hardirq();
>
> +#ifdef CONFIG_PAGE_POOL_CACHEDEBUG
> +	WARN(page_pool_napi_local(pool), "Page pool cache access from non-direct napi context");
I meant to negate the condition here.
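The corrected hunk would look roughly like the sketch below (untested, same
hypothetical CONFIG_PAGE_POOL_CACHEDEBUG as above; gating on allow_direct is
my assumption, so that the ordinary non-direct path does not trip the warning):

+#ifdef CONFIG_PAGE_POOL_CACHEDEBUG
+	/* Direct cache access is only safe from the pool's own NAPI context. */
+	WARN(allow_direct && !page_pool_napi_local(pool),
+	     "Page pool cache access from non-direct napi context");
+#endif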
Thanks,
Dragos