Message-ID: <87ftdypyec.fsf@nanos.tec.linutronix.de>
Date:   Mon, 23 Mar 2020 22:14:03 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     syzbot <syzbot+46f513c3033d592409d2@...kaller.appspotmail.com>,
        David Miller <davem@...emloft.net>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Jiri Pirko <jiri@...nulli.us>,
        Jakub Kicinski <kuba@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        syzkaller-bugs <syzkaller-bugs@...glegroups.com>
Subject: Re: WARNING: ODEBUG bug in tcindex_destroy_work (3)

Cong Wang <xiyou.wangcong@...il.com> writes:
> On Sat, Mar 21, 2020 at 3:19 AM Thomas Gleixner <tglx@...utronix.de> wrote:
>> > ------------[ cut here ]------------
>> > ODEBUG: free active (active state 0) object type: work_struct hint: tcindex_destroy_rexts_work+0x0/0x20 net/sched/cls_tcindex.c:143
>> ...
>> >  __debug_check_no_obj_freed lib/debugobjects.c:967 [inline]
>> >  debug_check_no_obj_freed+0x2e1/0x445 lib/debugobjects.c:998
>> >  kfree+0xf6/0x2b0 mm/slab.c:3756
>> >  tcindex_destroy_work+0x2e/0x70 net/sched/cls_tcindex.c:231
>>
>> So this is:
>>
>>         kfree(p->perfect);
>>
>> Looking at the place which queues that work:
>>
>> tcindex_destroy()
>>
>>    if (p->perfect) {
>>         if (tcf_exts_get_net(&r->exts))
>>             tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);
>>         else
>>             __tcindex_destroy_rexts(r);
>>    }
>>
>>    .....
>>
>>    tcf_queue_work(&p->rwork, tcindex_destroy_work);
>>
>> So obviously if tcindex_destroy_work() runs before
>> tcindex_destroy_rexts_work() then the above happens.
>
> We use an ordered workqueue for tc filters, so these two
> work items are executed in the same order as they are queued.
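
For reference, the ordered workqueue in question is presumably
tc_filter_wq from net/sched/cls_api.c, set up roughly like this
(a sketch, not the verbatim code):

    static struct workqueue_struct *tc_filter_wq;

    static int __init tc_filter_init(void)
    {
            /* Ordered: at most one work item executing at a time, in
             * the order the items reached the workqueue. */
            tc_filter_wq = alloc_ordered_workqueue("tc_filter_workqueue", 0);
            if (!tc_filter_wq)
                    return -ENOMEM;
            ...
    }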

The workqueue is ordered, but look at how the work is queued on the
workqueue:

tcf_queue_work()
  queue_rcu_work()
    call_rcu(&rwork->rcu, rcu_work_rcufn);

So after the grace period elapses, rcu_work_rcufn() queues it on the
actual workqueue.
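
The two pieces in kernel/workqueue.c look roughly like this (sketch,
not verbatim):

    bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)
    {
            struct work_struct *work = &rwork->work;

            if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
                    rwork->wq = wq;
                    /* The actual enqueue is deferred past a grace period */
                    call_rcu(&rwork->rcu, rcu_work_rcufn);
                    return true;
            }

            return false;
    }

    static void rcu_work_rcufn(struct rcu_head *rcu)
    {
            struct rcu_work *rwork = container_of(rcu, struct rcu_work, rcu);

            /* Only here does the work hit the (ordered) workqueue */
            local_irq_disable();
            __queue_work(WORK_CPU_UNBOUND, rwork->wq, &rwork->work);
            local_irq_enable();
    }

So the ordering guarantee of the workqueue only takes effect once
rcu_work_rcufn() has run; up to that point the ordering is determined
solely by the RCU callbacks.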

tcindex_destroy() is invoked via tcf_proto_destroy(), which can be
called from preemptible context. Now assume the following:

CPU0
  tcindex_destroy()
    tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);

-> Migration

CPU1
   tcf_queue_work(&p->rwork, tcindex_destroy_work);

So your RCU callbacks can be placed on different CPUs, which obviously
gives no ordering guarantee at all. See also:

  https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/Answers/RCU/RCUCBordering.html
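
The crux is that call_rcu() queues the callback on the list of
whatever CPU it happens to run on. Heavily simplified (the real thing
lives in kernel/rcu/tree.c):

    void call_rcu(struct rcu_head *head, rcu_callback_t func)
    {
            unsigned long flags;
            struct rcu_data *rdp;

            head->func = func;
            local_irq_save(flags);
            rdp = this_cpu_ptr(&rcu_data);     /* current CPU's RCU state */
            /* Append to this CPU's callback list */
            rcu_segcblist_enqueue(&rdp->cblist, head);
            local_irq_restore(flags);
    }

Callbacks on one CPU's list are invoked in enqueue order; across two
different CPUs' lists there is no such guarantee.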

Disabling preemption would "fix" it today, but that documentation
explicitly says that this is an implementation detail and not
guaranteed by design.
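
I.e. something along these lines, purely to illustrate the band-aid
(hypothetical, and again not guaranteed by design):

    static void tcindex_destroy(struct tcf_proto *tp, ...)
    {
            preempt_disable();      /* keep both call_rcu()s on one CPU */
            ...
            tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);
            ...
            tcf_queue_work(&p->rwork, tcindex_destroy_work);
            preempt_enable();
    }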

Thanks,

        tglx
