Message-ID: <CAM_iQpUu6524ZyZDBu=nkuhpubyGBTHEJ-HK8qrpCW=EEKGujw@mail.gmail.com>
Date: Mon, 30 Mar 2020 16:24:42 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: "Paul E . McKenney" <paulmck@...nel.org>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
syzbot <syzbot+46f513c3033d592409d2@...kaller.appspotmail.com>,
Thomas Gleixner <tglx@...utronix.de>,
Jamal Hadi Salim <jhs@...atatu.com>,
Jiri Pirko <jiri@...nulli.us>
Subject: Re: [Patch net] net_sched: add a temporary refcnt for struct tcindex_data
On Mon, Mar 30, 2020 at 2:35 PM Paul E. McKenney <paulmck@...nel.org> wrote:
>
> On Sat, Mar 28, 2020 at 12:12:59PM -0700, Cong Wang wrote:
> > Although we intentionally use an ordered workqueue for all tc
> > filter works, the ordering is not guaranteed by RCU work,
> > given that tcf_queue_work() is essentially a call_rcu().
> >
> > This problem is demonstrated by Thomas:
> >
> > CPU 0:
> > tcf_queue_work()
> > tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);
> >
> > -> Migration to CPU 1
> >
> > CPU 1:
> > tcf_queue_work(&p->rwork, tcindex_destroy_work);
> >
> > so the 2nd work could be queued before the 1st one, which leads
> > to a use-after-free.
> >
> > Enforcing this order in RCU work is hard, as it would require changing
> > RCU code too. Fortunately, we can work around this problem in the
> > tcindex filter by taking a temporary refcnt: we only take the refcnt
> > right before we begin to destroy it. This simplifies the code a lot,
> > as a full refcnt would require many more changes in tcindex_set_parms().
> >
> > Reported-by: syzbot+46f513c3033d592409d2@...kaller.appspotmail.com
> > Fixes: 3d210534cc93 ("net_sched: fix a race condition in tcindex_destroy()")
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Paul E. McKenney <paulmck@...nel.org>
> > Cc: Jamal Hadi Salim <jhs@...atatu.com>
> > Cc: Jiri Pirko <jiri@...nulli.us>
> > Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
>
> Looks plausible, but what did you do to verify that the structures
> were in fact being freed? See below for more detail.
I ran the syzbot reproducer for about 20 minutes, and no memory leak
was reported after scanning.
Thanks.
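
For reference, below is a minimal sketch of the temporary-refcnt idea
described in the quoted changelog. The field and helper names (refcnt,
tcindex_data_get()/tcindex_data_put()) and their placement are
illustrative assumptions, not quoted from the patch itself:

#include <linux/refcount.h>
#include <linux/slab.h>

struct tcindex_data {
	/* ... existing fields (hash tables, perfect hash, rcu work) ... */
	refcount_t refcnt;	/* pins the struct until the last destroy work */
};

/* Taken right before each per-filter destroy work is queued; the struct
 * would be allocated with the refcount set to 1 for the owning filter. */
static void tcindex_data_get(struct tcindex_data *p)
{
	refcount_inc(&p->refcnt);
}

/* Dropped at the end of each destroy work; the last put frees p. */
static void tcindex_data_put(struct tcindex_data *p)
{
	if (refcount_dec_and_test(&p->refcnt))
		kfree(p);
}

/*
 * In tcindex_destroy(), each queued work would then be paired with a
 * reference, e.g.:
 *
 *	tcindex_data_get(p);
 *	tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);
 *
 * and each destroy work would end with tcindex_data_put(p), so kfree(p)
 * only happens after the last work has finished, regardless of the order
 * in which the RCU works execute.
 */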