Message-ID: <20230110174702.72fe74c7@kernel.org>
Date: Tue, 10 Jan 2023 17:47:02 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: syzbot <syzbot+d94d214ea473e218fc89@...kaller.appspotmail.com>
Cc: acme@...nel.org, alexander.shishkin@...ux.intel.com,
bpf@...r.kernel.org, jolsa@...nel.org,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
mark.rutland@....com, mingo@...hat.com, namhyung@...nel.org,
netdev@...r.kernel.org, peterz@...radead.org,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] WARNING: locking bug in __perf_event_task_sched_in (2)
On Tue, 10 Jan 2023 12:50:48 -0800 syzbot wrote:
> <TASK>
> lock_acquire kernel/locking/lockdep.c:5668 [inline]
> lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
> rcu_lock_acquire include/linux/rcupdate.h:325 [inline]
> rcu_read_lock include/linux/rcupdate.h:764 [inline]
> perf_event_context_sched_in kernel/events/core.c:3913 [inline]
> __perf_event_task_sched_in+0xe2/0x6c0 kernel/events/core.c:3980
> perf_event_task_sched_in include/linux/perf_event.h:1328 [inline]
> finish_task_switch.isra.0+0x5e5/0xc80 kernel/sched/core.c:5118
> context_switch kernel/sched/core.c:5247 [inline]
> __schedule+0xb92/0x5450 kernel/sched/core.c:6555
> schedule+0xde/0x1b0 kernel/sched/core.c:6631
> schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6690
> __mutex_lock_common kernel/locking/mutex.c:679 [inline]
> __mutex_lock+0xa48/0x1360 kernel/locking/mutex.c:747
> devl_lock net/devlink/core.c:54 [inline]
> devlink_pernet_pre_exit+0x10a/0x220 net/devlink/core.c:301
> ops_pre_exit_list net/core/net_namespace.c:159 [inline]
> cleanup_net+0x455/0xb10 net/core/net_namespace.c:594
> process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
> worker_thread+0x669/0x1090 kernel/workqueue.c:2436
> kthread+0x2e8/0x3a0 kernel/kthread.c:376
> ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
> </TASK>
Yes, I pooped it. We need to keep the mutex around as well as
the devlink instance memory, otherwise lockdep screams.

Fix building..
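
For illustration only (not the actual devlink patch, and all names below
are made up): the lifetime rule being described is that a lock embedded in
a refcounted object must only be destroyed in the final-put/release path,
together with the memory, never in the earlier unregister/teardown path
where another reference holder may still be sleeping on it. A minimal
userspace analogue of that pattern, using pthreads instead of the kernel
mutex/lockdep machinery:

	/*
	 * Hypothetical userspace sketch of the "mutex lives as long as the
	 * instance memory" rule. Not kernel code, not the real fix.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct instance {
		atomic_int refcount;
		pthread_mutex_t lock;	/* protects the instance's state */
		int registered;
	};

	static struct instance *instance_alloc(void)
	{
		struct instance *inst = calloc(1, sizeof(*inst));

		if (!inst)
			return NULL;
		atomic_store(&inst->refcount, 1);
		pthread_mutex_init(&inst->lock, NULL);
		inst->registered = 1;
		return inst;
	}

	static void instance_get(struct instance *inst)
	{
		atomic_fetch_add(&inst->refcount, 1);
	}

	/* Final put: only here is it safe to destroy the lock and free. */
	static void instance_put(struct instance *inst)
	{
		if (atomic_fetch_sub(&inst->refcount, 1) == 1) {
			pthread_mutex_destroy(&inst->lock);
			free(inst);
		}
	}

	/*
	 * Teardown path: marks the instance dead but deliberately does NOT
	 * destroy the lock -- a late reference holder may still be blocked
	 * in pthread_mutex_lock() on this very mutex.
	 */
	static void instance_unregister(struct instance *inst)
	{
		pthread_mutex_lock(&inst->lock);
		inst->registered = 0;
		pthread_mutex_unlock(&inst->lock);
		instance_put(inst);	/* drop the "registered" reference */
	}

	int main(void)
	{
		struct instance *inst = instance_alloc();

		if (!inst)
			return 1;

		instance_get(inst);		/* a late user still holds a ref */
		instance_unregister(inst);	/* must not kill the lock here */

		/* Safe: memory and lock both survive until the last put. */
		pthread_mutex_lock(&inst->lock);
		printf("registered=%d\n", inst->registered);
		pthread_mutex_unlock(&inst->lock);

		instance_put(inst);
		return 0;
	}

If the teardown path destroyed the lock early, the late lock() above would
be use-after-destroy, which is the kind of thing lockdep flags in the
kernel version of this pattern.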