Message-ID: <CACT4Y+YuM55YUT37jwRP163J7ha25cN03sZ5WqTUPkz3e43Ggw@mail.gmail.com>
Date: Thu, 16 Jan 2020 06:25:46 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Cong Wang <xiyou.wangcong@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Taehee Yoo <ap420073@...il.com>,
syzbot <syzbot+aaa6fa4949cc5d9b7b25@...kaller.appspotmail.com>,
Ingo Molnar <mingo@...nel.org>,
Will Deacon <will.deacon@....com>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: BUG: MAX_LOCKDEP_CHAINS too low!
On Wed, Jan 15, 2020 at 10:53 PM Cong Wang <xiyou.wangcong@...il.com> wrote:
> > +Taehee, Cong,
> >
> > In the other thread Taehee mentioned that dynamic lockdep keys for
> > net devices were added recently and that they are subject to some
> > limits.
> > syzkaller creates lots of net devices for isolation (several dozen
> > per test process, but these can be created and destroyed
> > periodically). I wonder if that is the root cause of the lockdep
> > limit problems?
>
> Very possibly. In the current code base, there are 4 lockdep keys
> per netdev:
>
> struct lock_class_key qdisc_tx_busylock_key;
> struct lock_class_key qdisc_running_key;
> struct lock_class_key qdisc_xmit_lock_key;
> struct lock_class_key addr_list_lock_key;
>
> so the number of lockdep keys is at least 4x the number of network
> devices.
And these are not freed/reused, right? So with dynamic keys LOCKDEP
inherently can't handle prolonged running, only O(1) work?
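
For reference, a minimal sketch of the per-device dynamic key pattern
under discussion, assuming the lockdep_register_key()/
lockdep_unregister_key() and lockdep_set_class() interfaces; the struct
and helper names below are illustrative, not the actual net/core/dev.c
code:

#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Illustrative only: one set of dynamic lockdep keys per device,
 * registered at device creation and unregistered at destruction. */
struct demo_netdev {
	spinlock_t		addr_list_lock;
	struct lock_class_key	addr_list_lock_key;
	/* ... plus qdisc_tx_busylock_key, qdisc_running_key and
	 *     qdisc_xmit_lock_key in the real net_device ... */
};

static void demo_netdev_register_lockdep_key(struct demo_netdev *dev)
{
	/* Each registered key becomes a distinct lock class, so every
	 * device created adds to lockdep's class/chain bookkeeping. */
	lockdep_register_key(&dev->addr_list_lock_key);
	spin_lock_init(&dev->addr_list_lock);
	lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key);
}

static void demo_netdev_unregister_lockdep_key(struct demo_netdev *dev)
{
	/* The key is unregistered here; whether the lock chains recorded
	 * while it was live are reused afterwards is exactly the question
	 * raised above. */
	lockdep_unregister_key(&dev->addr_list_lock_key);
}
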
> I think only addr_list_lock_key is necessary, as it has a nested
> locking use case; the rest are not. Taehee, do you agree?
>
> I plan to remove at least qdisc_xmit_lock_key for net-next
> after the fix for net gets merged.
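
To make the nested-locking point above concrete, a minimal sketch
assuming the standard spin_lock_nested()/SINGLE_DEPTH_NESTING
annotation; the function and parameter names are made up for
illustration and are not the actual net core code:

#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Illustrative: an upper device and a lower device each have an
 * addr_list_lock.  Syncing addresses from the upper device into the
 * lower one takes both locks; if the two locks share one class, lockdep
 * would flag a recursive acquisition unless the inner lock is annotated
 * with a nesting subclass or carries its own per-device key. */
static void demo_sync_addr_lists(spinlock_t *upper_lock,
				 spinlock_t *lower_lock)
{
	spin_lock(upper_lock);
	/* SINGLE_DEPTH_NESTING marks this as an intentional nested
	 * acquisition within the same class, not a self-deadlock. */
	spin_lock_nested(lower_lock, SINGLE_DEPTH_NESTING);

	/* ... copy/sync the address list here ... */

	spin_unlock(lower_lock);
	spin_unlock(upper_lock);
}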