Message-ID: <20230719203030.1296596a@kernel.org>
Date: Wed, 19 Jul 2023 20:30:30 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: netdev@...r.kernel.org
Cc: Florian Westphal <fw@...len.de>,
Aleksandr Nogikh <nogikh@...gle.com>,
syzbot <syzbot+9bbbacfbf1e04d5221f7@...kaller.appspotmail.com>,
dsterba@...e.cz, bakmitopiacibubur@...a.indosterling.com,
clm@...com, davem@...emloft.net, dsahern@...nel.org,
dsterba@...e.com, gregkh@...uxfoundation.org, jirislaby@...nel.org,
josef@...icpanda.com, kadlec@...filter.org,
linux-btrfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-serial@...r.kernel.org,
linux@...linux.org.uk, netfilter-devel@...r.kernel.org,
pablo@...filter.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS
too low! (2)

On Thu, 20 Jul 2023 01:12:07 +0200 Florian Westphal wrote:
> I don't see any netfilter involvement here.
>
> The repro just creates a massive number of team devices.
>
> At the time it hits the lockdep limits on my test VM it has
> created ~2k team devices; system load is at 14+ because udev
> is also busy spawning hotplug scripts for the new devices.
>
> After a reboot, suspending the running reproducer at about 1500
> devices (before hitting the lockdep limits) and then running
> 'ip link del' for the team devices gets the lockdep entries down
> to ~8k (from 40k), which is in the range this VM shows after a
> fresh boot.
>
> So as far as I can see this workload is just pushing lockdep
> past what it can handle with the configured settings and is
> not triggering any actual bug.
The lockdep splat caused by netdevice stacking is one of our top
reports from syzbot. Is anyone else feeling like we should add
an artificial but very high limit on netdev stacking? :(
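
To make that concrete, what I have in mind is a hard cap at link time.
A minimal sketch only -- MAX_NETDEV_STACK and the helper name are made
up, and the value is arbitrary; it leans on the dev->upper_level /
dev->lower_level nesting counters the kernel already maintains:

    /* Sketch only: MAX_NETDEV_STACK is a made-up constant and the
     * value is arbitrary.  The idea is to refuse further stacking with
     * -EMLINK once the combined depth crosses an artificial ceiling.
     */
    #define MAX_NETDEV_STACK	100

    static int netdev_stacking_ok(const struct net_device *upper,
    			          const struct net_device *lower)
    {
    	/* depth of the chain that linking lower under upper creates */
    	if (lower->lower_level + upper->upper_level > MAX_NETDEV_STACK)
    		return -EMLINK;
    	return 0;
    }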