Message-Id: <20231202003027.1081-1-hdanton@sina.com>
Date: Sat,  2 Dec 2023 08:30:27 +0800
From: Hillf Danton <hdanton@...a.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: xingwei lee <xrivendell7@...il.com>,
	syzbot+9ada62e1dc03fdc41982@...kaller.appspotmail.com,
	linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org,
	syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [net?] WARNING in cleanup_net (3)

On Fri, 1 Dec 2023 13:52:49 +0100 Eric Dumazet <edumazet@...gle.com>
> On Fri, Dec 1, 2023 at 12:13 PM Hillf Danton <hdanton@...a.com> wrote:
> > On Fri, 1 Dec 2023 08:39:32 +0800 xingwei lee <xrivendell7@...il.com>
> > > I forgot to CC others, so I am resending this mail.
> > > Sorry, Dumazet. I found this bug with my modified syzkaller in my
> > > local environment.
> > > You are right: I hit this bug about 10 times, and along the way I
> > > applied some heuristics to the syz-repro to improve the odds of
> > > triggering it.
> > > I can confirm the reproducer triggers the bug quickly, and I hope it helps you.
> > > I'll test your patch and give you feedback ASAP.
> > >
> > > I applied your patch from
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3b47bc037bd44f142ac09848e8d3ecccc726be99
> > > with a small fix:
> > >
> > > diff --git a/net/core/sock.c b/net/core/sock.c
> > > index fef349dd72fa..36d2871ac24f 100644
> > > --- a/net/core/sock.c
> > > +++ b/net/core/sock.c
> > > @@ -2197,8 +2197,6 @@ static void __sk_destruct(struct rcu_head *head)
> > >
> > >         if (likely(sk->sk_net_refcnt))
> > >                 put_net_track(sock_net(sk), &sk->ns_tracker);
> > > -       else
> > > -               __netns_tracker_free(sock_net(sk), &sk->ns_tracker, false);
> > >
> > >         sk_prot_free(sk->sk_prot_creator, sk);
> > >  }
> > > @@ -2212,6 +2210,9 @@ void sk_destruct(struct sock *sk)
> > >                 use_call_rcu = true;
> > >         }
> > >
> > > +       if (unlikely(!sk->sk_net_refcnt))
> > > +               __netns_tracker_free(sock_net(sk), &sk->ns_tracker, false);
> > > +
> > >         if (use_call_rcu)
> > >                 call_rcu(&sk->sk_rcu, __sk_destruct);
> > >         else
> > >
> > > and it still triggers the crash, as below:
> >
> > This looks like a refcount leak that could be cured with the diff below.
> > Just food for thought.
> >
> > --- x/include/net/net_namespace.h
> > +++ y/include/net/net_namespace.h
> > @@ -320,7 +320,7 @@ static inline int check_net(const struct
> >         return 1;
> >  }
> >
> > -#define net_drop_ns NULL
> > +static inline void net_drop_ns(void *w) { }
> >  #endif
> >
> >
> > @@ -355,7 +355,7 @@ static inline void __netns_tracker_free(
> >  static inline struct net *get_net_track(struct net *net,
> >                                         netns_tracker *tracker, gfp_t gfp)
> >  {
> > -       get_net(net);
> > +       refcount_inc(&net->passive);
> >         netns_tracker_alloc(net, tracker, gfp);
> >         return net;
> >  }
> > @@ -363,7 +363,7 @@ static inline struct net *get_net_track(
> >  static inline void put_net_track(struct net *net, netns_tracker *tracker)
> >  {
> >         __netns_tracker_free(net, tracker, true);
> > -       put_net(net);
> > +       net_drop_ns(net);
> >  }
> >
> >  typedef struct {
> > --
> 
> I do not think so. If you look at my prior patch, my thinking was:
> 
> At netns dismantle, RDS is supposed to close all kernel sockets it created.
> 
> Because of the RCU grace period imposed on TCP listeners, my concern was
> that we might have to release sk->ns_tracker before
> the RCU grace period ended. (I think my patch makes sense anyway; I
> mentioned this race possibility in the past.)
> 
> If the splat still occurs, this means that at the end of
> rds_tcp_listen_stop(), the rds_tcp_listen_sock->sk refcount had not
> yet reached 0.
> 
> Therefore I think the bug is in RDS.

I suspect it is in RDS too, because we know the synchronize_rcu() in
cleanup_net() failed to work [1].

[1] https://lore.kernel.org/lkml/CANn89iJj_VR0L7g3-0=aZpKbXfVo7=BG0tsb8rhiTBc4zi_EtQ@mail.gmail.com/
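
That is, the ordering at play looks roughly like the sketch below
(a simplified, untested illustration; the function names come from
net/core/net_namespace.c and net/core/sock.c, everything else is elided):

	cleanup_net()
	  synchronize_rcu();	/* runs before the per-netns exit hooks */
	  ops_exit_list();	/* rds exit -> rds_tcp_listen_stop(),
				 * whose listener close ends up queuing
				 * call_rcu(&sk->sk_rcu, __sk_destruct) */
	  net_free();		/* kernel-socket tracker still registered
				 * -> WARNING */

	__sk_destruct()		/* runs only after a later grace period */
	  __netns_tracker_free();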
> 
> We could add a debug point in rds_tcp_listen_sock(); I suspect
> something in RDS took a sock_hold(sk)
> and did not release the refcount before we exited rds_tcp_listen_stop().
> 
> Another way would be to add a tracker on sockets, but that seems like a lot of work.
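
Something along the lines below could serve as that debug point (an
untested sketch, placed just before the final sock_release(sock) in
rds_tcp_listen_stop(); the message text is made up):

	/* Anything above 1 here means some other RDS path still holds
	 * a sock_hold() reference on the listener at shutdown time.
	 */
	if (sock->sk && refcount_read(&sock->sk->sk_refcnt) > 1)
		pr_warn("rds: listener sk_refcnt=%u in listen_stop\n",
			refcount_read(&sock->sk->sk_refcnt));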
