Message-ID: <67dc1d3d60383_a04b129455@willemb.c.googlers.com.notmuch>
Date: Thu, 20 Mar 2025 09:50:53 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Paolo Abeni <pabeni@...hat.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
David Ahern <dsahern@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Simon Horman <horms@...nel.org>,
Willem de Bruijn <willemb@...gle.com>,
steffen.klassert@...unet.com
Subject: Re: [PATCH net-next] udp_tunnel: properly deal with xfrm gro encap.
Paolo Abeni wrote:
> On 3/19/25 6:44 PM, Willem de Bruijn wrote:
> > Paolo Abeni wrote:
> >> Given syzkaller has found another splat with no reproducer on the other
> >> UDP GRO change of mine [1] and we are almost at merge window time, I'm
> >> considering reverting entirely such changes and re-submit later
> >> (hopefully fixed). WDYT?
> >
> > Your call. I suspect that we can forward-fix this. But yes, that is
> > always the riskier approach. And from a first quick look at the
> > report, the right fix is indeed not immediately obvious.
>
> One problem is that I have my hands quite full at the moment; since
> the revert looks like the faster way out, it is the more appealing
> candidate to me.
>
> WRT the other issue, I think the problem is in udp_tunnel_cleanup_gro();
> this check:
>
> 	if (!up->tunnel_list.pprev)
> 		return;
>
> at sk deletion time is performed outside any lock. The current CPU
> could observe the old (empty) list value even if another core
> previously added the sk to the UDP tunnel GRO list, thus skipping the
> removal from that list. A later list operation will then hit a
> use-after-free while touching the same list.
>
> Moving the check under the udp_tunnel_gro_lock spinlock should solve the
> issue.
FWIW, the full revert doesn't have to happen in the net-next timeframe.
We can always fall back to that in -rcX, and try a forward fix first,
after the merge window is over.