[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <md46ky57c74xrw2l2y5biwnw4vzgn6juiovqkx7tzdwks6smab@vpfd5hmclioa>
Date: Fri, 4 Jul 2025 20:14:20 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: Chris Arges <carges@...udflare.com>, netdev@...r.kernel.org,
bpf@...r.kernel.org
Cc: kernel-team <kernel-team@...udflare.com>,
Jesper Dangaard Brouer <hawk@...nel.org>, tariqt@...dia.com, saeedm@...dia.com,
Leon Romanovsky <leon@...nel.org>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>, Simon Horman <horms@...nel.org>,
Andrew Rzeznik <arzeznik@...udflare.com>, Yan Zhai <yan@...udflare.com>
Subject: Re: [BUG] mlx5_core memory management issue
On Fri, Jul 04, 2025 at 12:37:36PM +0000, Dragos Tatulea wrote:
> On Thu, Jul 03, 2025 at 10:49:20AM -0500, Chris Arges wrote:
> > When running iperf through a set of XDP programs we were able to crash
> > machines with NICs using the mlx5_core driver. We were able to confirm
> > that other NICs/drivers did not exhibit the same problem, and suspect
> > this could be a memory management issue in the driver code.
> > Specifically we found a WARNING at include/net/page_pool/helpers.h:277
> > mlx5e_page_release_fragmented.isra. We are able to demonstrate this
> > issue in production using hardware, but cannot easily bisect because
> > we don’t have a simple reproducer.
> >
> Thanks for the report! We will investigate.
>
> > I wanted to share stack traces in
> > order to help us further debug and understand if anyone else has run
> > into this issue. We are currently working on getting more crashdumps
> > and doing further analysis.
> >
> >
> > The test setup looks like the following:
> > ┌─────┐
> > │mlx5 │
> > │NIC │
> > └──┬──┘
> > │xdp ebpf program (does encap and XDP_TX)
> > │
> > ▼
> > ┌──────────────────────┐
> > │xdp.frags │
> > │ │
> > └──┬───────────────────┘
> > │tailcall
> > │BPF_REDIRECT_MAP (using CPUMAP bpf type)
> > ▼
> > ┌──────────────────────┐
> > │xdp.frags/cpumap │
> > │ │
> > └──┬───────────────────┘
> > │BPF_REDIRECT to veth (*potential trigger for issue)
> > │
> > ▼
> > ┌──────┐
> > │veth │
> > │ │
> > └──┬───┘
> > │
> > │
> > ▼
> >
> > Here an mlx5 NIC has an xdp.frags program attached which tailcalls via
> > BPF_REDIRECT_MAP into an xdp.frags/cpumap. For our reproducer we can
> > choose a random valid CPU to reproduce the issue. Once that packet
> > reaches the xdp.frags/cpumap program we then do another BPF_REDIRECT
> > to a veth device which has an XDP program which redirects to an
> > XSKMAP. It wasn’t until we added the additional BPF_REDIRECT to the
> > veth device that we noticed this issue.
> >
> Would it be possible to try to use a single program that redirects to
> the XSKMAP and check that the issue reproduces?
>
I forgot to ask: what is the MTU size?
Also, are you setting any other special config on the device?
Thanks,
Dragos
Powered by blists - more mailing lists