Message-ID: <CAM_iQpVtcNFeEtW15z_nZoyC1Q-_pCq+UfZ4vYBB3Lb2CMm4Mg@mail.gmail.com>
Date: Wed, 15 Jan 2020 13:53:28 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Taehee Yoo <ap420073@...il.com>,
syzbot <syzbot+aaa6fa4949cc5d9b7b25@...kaller.appspotmail.com>,
Ingo Molnar <mingo@...nel.org>,
Will Deacon <will.deacon@....com>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: BUG: MAX_LOCKDEP_CHAINS too low!
On Mon, Jan 13, 2020 at 3:11 AM Dmitry Vyukov <dvyukov@...gle.com> wrote:
> +Taehee, Cong,
>
> In the other thread Taehee mentioned the creation of dynamic keys for
> net devices that was added recently and that they are subject to some
> limits.
> syzkaller creates lots of net devices for isolation (several dozens
> per test process, but then these can be created and destroyed
> periodically). I wonder if it's the root cause of the lockdep limits
> problems?
Very possibly. In the current code base, there are 4 lockdep keys
per netdev:
struct lock_class_key qdisc_tx_busylock_key;
struct lock_class_key qdisc_running_key;
struct lock_class_key qdisc_xmit_lock_key;
struct lock_class_key addr_list_lock_key;
so the number of lockdep keys is at least 4x the number of network
devices.
I think only addr_list_lock_key is necessary, as it has a nested
locking use case; the rest do not. Taehee, do you agree?
I plan to remove at least qdisc_xmit_lock_key for net-next
after the fix for net gets merged.
Thanks!