Message-ID: <CANn89iLmhzrbuWu0xp-+yhy64UVbO9fN45y3D-D-OMWnB-+OEQ@mail.gmail.com>
Date: Mon, 8 Jul 2024 13:53:19 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>
Cc: davem@...emloft.net, dsahern@...nel.org, kuba@...nel.org,
kuni1840@...il.com, netdev@...r.kernel.org, pabeni@...hat.com,
syzkaller@...glegroups.com, willemdebruijn.kernel@...il.com
Subject: Re: [PATCH v1 net] udp: Set SOCK_RCU_FREE earlier in udp_lib_get_port().
On Mon, Jul 8, 2024 at 12:20 PM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
>
> From: Eric Dumazet <edumazet@...gle.com>
> Date: Mon, 8 Jul 2024 12:07:56 -0700
> > On Mon, Jul 8, 2024 at 11:55 AM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
> > >
> > > From: Eric Dumazet <edumazet@...gle.com>
> > > Date: Mon, 8 Jul 2024 11:38:41 -0700
> > > > On Mon, Jul 8, 2024 at 11:20 AM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
> > > > >
> > > > > syzkaller triggered the warning [0] in udp_v4_early_demux().
> > > > >
> > > > > In udp_v4_early_demux(), we do not touch the refcount of the looked-up
> > > > > sk and use sock_pfree() as skb->destructor, so we check SOCK_RCU_FREE
> > > > > to ensure that the sk is safe to access during the RCU grace period.
> > > > >
> > > > > Currently, SOCK_RCU_FREE is flagged for a bound socket after being put
> > > > > into the hash table. Moreover, the SOCK_RCU_FREE check is done too
> > > > > early in udp_v4_early_demux(), so there could be a small race window:
> > > > >
> > > > > CPU1 CPU2
> > > > > ---- ----
> > > > > udp_v4_early_demux() udp_lib_get_port()
> > > > > | |- hlist_add_head_rcu()
> > > > > |- sk = __udp4_lib_demux_lookup() |
> > > > > |- DEBUG_NET_WARN_ON_ONCE(sk_is_refcounted(sk));
> > > > > `- sock_set_flag(sk, SOCK_RCU_FREE)
> > > > >
> > > > > In practice, sock_pfree() is called much later, by which time
> > > > > SOCK_RCU_FREE has most likely propagated to other CPUs; otherwise,
> > > > > we would also see a warning of sk refcount underflow, but at least
> > > > > I have not seen one.
> > > > >
> > > > > Technically, moving sock_set_flag(sk, SOCK_RCU_FREE) before
> > > > > hlist_add_{head,tail}_rcu() does not guarantee the order, and we
> > > > > must put smp_mb() between them, or smp_wmb() there and smp_rmb()
> > > > > in udp_v4_early_demux().
> > > > >
> > > > > But it's overkill in the real scenario, so I just put smp_mb() only under
> > > > > CONFIG_DEBUG_NET to silence the splat. When we see the refcount underflow
> > > > > warning, we can remove the config guard.
> > > > >
> > > > > Another option would be to remove DEBUG_NET_WARN_ON_ONCE(), but this could
> > > > > make future debugging harder without the hints in udp_v4_early_demux() and
> > > > > udp_lib_get_port().
> > > > >
> > > > > [0]:
> > > > >
> > > > > Fixes: 08842c43d016 ("udp: no longer touch sk->sk_refcnt in early demux")
> > > > > Reported-by: syzkaller <syzkaller@...glegroups.com>
> > > > > Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.com>
> > > > > ---
> > > > > net/ipv4/udp.c | 8 +++++++-
> > > > > 1 file changed, 7 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> > > > > index 189c9113fe9a..1a05cc3d2b4f 100644
> > > > > --- a/net/ipv4/udp.c
> > > > > +++ b/net/ipv4/udp.c
> > > > > @@ -326,6 +326,12 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
> > > > > goto fail_unlock;
> > > > > }
> > > > >
> > > > > + sock_set_flag(sk, SOCK_RCU_FREE);
> > > >
> > > > Nice catch.
> > > >
> > > > > +
> > > > > + if (IS_ENABLED(CONFIG_DEBUG_NET))
> > > > > + /* for DEBUG_NET_WARN_ON_ONCE() in udp_v4_early_demux(). */
> > > > > + smp_mb();
> > > > > +
> > > >
> > > > I do not think this smp_mb() is needed. If it were, many other RCU
> > > > operations would need it.
> > > >
> > > > RCU rules mandate that all memory writes must be committed before
> > > > the object can be seen by other CPUs in the hash table.
> > > >
> > > > This includes the setting of the SOCK_RCU_FREE flag.
> > > >
> > > > For instance, hlist_add_head_rcu() does a
> > > > rcu_assign_pointer(hlist_first_rcu(h), n);
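As a minimal user-space sketch of that guarantee, assuming C11 release/acquire atomics as stand-ins for rcu_assign_pointer()/rcu_dereference() (all names here are hypothetical, not the kernel's): the release store that links the object orders all earlier plain writes, including the flag, before the object becomes reachable.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct fake_sock {
	bool rcu_free;	/* stand-in for SOCK_RCU_FREE */
};

static struct fake_sock slot;
static _Atomic(struct fake_sock *) head;  /* stand-in for the hash bucket */

/* Publish side: the flag is written before the release store that links
 * the object in, mirroring what rcu_assign_pointer() guarantees. */
void publish(void)
{
	slot.rcu_free = true;			/* plain write ...      */
	atomic_store_explicit(&head, &slot,	/* ... ordered by the   */
			      memory_order_release); /* release store   */
}

/* Reader side: an acquire load of the pointer (rcu_dereference()) makes
 * the earlier plain writes visible -- no extra smp_mb() is needed. */
bool lookup_sees_flag(void)
{
	struct fake_sock *sk =
		atomic_load_explicit(&head, memory_order_acquire);
	return sk ? sk->rcu_free : false;
}
```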
> > >
> > > Ah, I was thinking the spinlock would not prevent reordering, but
> > > now I see that rcu_assign_pointer() has the necessary barrier. :)
> > >
> > > /**
> > > * rcu_assign_pointer() - assign to RCU-protected pointer
> > > ...
> > > * Assigns the specified value to the specified RCU-protected
> > > * pointer, ensuring that any concurrent RCU readers will see
> > > * any prior initialization.
> > >
> > > will remove smp_mb() and update the changelog in v2.
> > >
> >
> > A similar commit was
> >
> > commit 871019b22d1bcc9fab2d1feba1b9a564acbb6e99
> > Author: Stanislav Fomichev <sdf@...ichev.me>
> > Date: Wed Nov 8 13:13:25 2023 -0800
> >
> > net: set SOCK_RCU_FREE before inserting socket into hashtable
> >
> > So I wonder if the bug could be older...
>
> If we focus on the ordering, the Fixes tag would be
>
> Fixes: ca065d0cf80f ("udp: no longer use SLAB_DESTROY_BY_RCU")
>
> But, at that time, we had atomic_inc_not_zero_hint() and used
> sock_efree(), which were removed later in 08842c43d016.
>
> Which one should I use as Fixes: ?
I think the older issue might only surface with eBPF users.
commit 6acc9b432e6714d72d7d77ec7c27f6f8358d0c71
Author: Joe Stringer <joe@...d.net.nz>
Date: Tue Oct 2 13:35:36 2018 -0700
bpf: Add helper to retrieve socket in BPF
The effect of the bug would be a UDP socket leak: atomic_inc_not_zero_hint()
could take a reference before SOCK_RCU_FREE was set, and the later refcount
decrement would then be skipped once SOCK_RCU_FREE was observed.

08842c43d016 ("udp: no longer touch sk->sk_refcnt in early demux")
added a DEBUG_NET_WARN_ON_ONCE() which made the bug visible.
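A minimal sketch of that leak scenario, with illustrative names only (fake_sock, demux_lookup, pfree are not the kernel's symbols): the lookup runs on a CPU that has not yet observed the flag and takes a reference, while the destructor runs after the flag is visible and skips the put, so the reference is never dropped.

```c
#include <stdbool.h>

struct fake_sock {
	int  refcnt;
	bool rcu_free;	/* stand-in for SOCK_RCU_FREE */
};

/* Lookup on a CPU that has not yet observed the flag: takes a
 * reference, as the old atomic_inc_not_zero_hint() path did. */
void demux_lookup(struct fake_sock *sk)
{
	if (!sk->rcu_free)
		sk->refcnt++;
}

/* Destructor on a CPU that does observe the flag: the sock_put()
 * is skipped, leaving the extra reference behind. */
void pfree(struct fake_sock *sk)
{
	if (!sk->rcu_free)
		sk->refcnt--;
}
```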