Message-ID: <515a7435bd83ecc8a9d63306d4bc076c762f22bf.camel@sipsolutions.net>
Date: Wed, 13 Sep 2023 20:11:39 +0200
From: Johannes Berg <johannes@...solutions.net>
To: Guenter Roeck <linux@...ck-us.net>,
 Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Lai Jiangshan <jiangshanlai@...il.com>, Tejun Heo <tj@...nel.org>,
 Hillf Danton <hdanton@...a.com>, LKML <linux-kernel@...r.kernel.org>,
 Heyi Guo <guoheyi@...ux.alibaba.com>, netdev@...r.kernel.org
Subject: Re: [PATCH v3] workqueue: don't skip lockdep work dependency in
cancel_work_sync()
On Wed, 2023-09-13 at 08:59 -0700, Guenter Roeck wrote:
>
> So you are saying that anything running in a workqueue must not
> acquire rtnl_lock because cancel_[delayed_]work_sync() may be called
> under rtnl_lock.
No no, sorry if I wasn't clear. I mean this particular function / work
struct cannot acquire the RTNL because the cancel _for it_ is called
under RTNL.
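To make it concrete, the shape I mean is roughly this (just a sketch
with made-up driver names, not the actual code from the report):

#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

struct my_dev {
	struct work_struct state_work;
};

/* work handler: takes the RTNL */
static void my_state_work(struct work_struct *work)
{
	rtnl_lock();
	/* ... update device state under the RTNL ... */
	rtnl_unlock();
}

/* teardown path, entered with the RTNL already held */
static void my_teardown(struct my_dev *dev)
{
	ASSERT_RTNL();
	/*
	 * Deadlock: we hold the RTNL and wait here for the handler
	 * to finish, while the handler may be waiting for the RTNL.
	 */
	cancel_work_sync(&dev->state_work);
}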
It used to be that this was also tied to the entire workqueue, but this
is no longer true due to the way workqueues work these days.
> FWIW, it would be nice if the lockdep code would generate some other
> message in this situation. Complaining about a deadlock involving a
> lock that doesn't exist if lock debugging isn't enabled is not really
> helpful and, yes, may result in reporters falsely assuming that this
> lock is responsible for the potential deadlock.
Well, I don't know of any way to tell lockdep that, but I guess ideas
welcome? I mean, I'm not even sure what else it would tell you, other
than that you have a deadlock?
I mean, OK, I guess that's fair - it does say "acquire lock", as in
[    9.810406] ip/357 is trying to acquire lock:
[    9.810501] 83af6c40 ((work_completion)(&(&dev->state_queue)->work)){+.+.}-{0:0}, at: __flush_work+0x40/0x550
and it's not really a lock, but I'm not even sure how to phrase it
better? Note the scenario may be more complex than here.
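(For reference, the reason it looks like a lock at all is that the
workqueue code takes a fake lockdep "lock" in the right places - very
roughly like this, simplified from kernel/workqueue.c, not the exact
current code:

	/* in the worker, around running the handler, so everything
	 * the handler locks nests inside the work's "completion lock" */
	lock_map_acquire(&work->lockdep_map);
	worker->current_func(work);
	lock_map_release(&work->lockdep_map);

	/* in __flush_work()/cancel_work_sync(), so the wait is recorded
	 * as acquiring that same "lock" under whatever the caller holds */
	lock_map_acquire(&work->lockdep_map);
	lock_map_release(&work->lockdep_map);

so from lockdep's point of view it really is just a lock.)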
I mean, perhaps we could add an optional message somehow and it could
say
"ip/357 is waiting for the work:"
but then we'd also have to update the scenario message to something like
[    9.813938]        CPU0                    CPU1
[    9.813999]        ----                    ----
[    9.814062]   lock(rtnl_mutex);
[    9.814139]                                run((work_completion)(&(&dev->state_queue)->work));
[    9.814258]                                lock(rtnl_mutex);
[    9.814354]   wait((work_completion)(&(&dev->state_queue)->work));
which is really hard to do, because how would lockdep know that the two
ways of "acquiring the lock" are actually different, and which one is
which? I'm not even convinced it could really do that.
In any case, I'd rather have a bug report from this than not, even if
it's not trivial to read.
... and here I thought we went through all of this 15+ years ago when I
added it in commit 4e6045f13478 ("workqueue: debug flushing deadlocks
with lockdep"), at which time the situation was actually worse because
you had to pay attention not only to the work struct, but also to the
entire workqueue - which is today only true for ordered workqueues... Oh
well :)
johannes