Message-ID: <c6e666eb6bcd84caa0affacd7e994bb413c82950.camel@sipsolutions.net>
Date: Thu, 25 Oct 2018 22:47:58 +0200
From: Johannes Berg <johannes@...solutions.net>
To: Bart Van Assche <bvanassche@....org>, Tejun Heo <tj@...nel.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
"tytso@....edu" <tytso@....edu>
Subject: Re: [PATCH 3/3] kernel/workqueue: Suppress a false positive lockdep
complaint
On Thu, 2018-10-25 at 13:39 -0700, Bart Van Assche wrote:
>
> > +void flush_workqueue_nested(struct workqueue_struct *wq, int subclass)
> > {
> > struct wq_flusher this_flusher = {
> > .list = LIST_HEAD_INIT(this_flusher.list),
> > @@ -2652,7 +2653,7 @@ void flush_workqueue(struct workqueue_struct *wq)
> > if (WARN_ON(!wq_online))
> > return;
> >
> > - lock_map_acquire(&wq->lockdep_map);
> > + lock_acquire_exclusive(&wq->lockdep_map, subclass, 0, NULL, _THIS_IP_);
> > lock_map_release(&wq->lockdep_map);
> >
> > mutex_lock(&wq->mutex);
> > [ ... ]
>
> I don't like this approach because it doesn't match how other kernel code uses
> lockdep annotations. All other kernel code I know of only annotates lock depmaps
> as nested if the same kernel thread calls lock_acquire() twice for the same depmap
> without intervening lock_release(). My understanding is that with your patch
> applied flush_workqueue_nested(wq, 1) calls lock_acquire() only once and with the
> subclass argument set to one. I think this will confuse other people who will read
> the workqueue implementation and who have not followed this conversation.
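[For reference, the convention Bart describes is the usual lockdep subclass pattern: one thread acquires two locks of the same class, annotating the second with a subclass. A minimal sketch (struct and function names invented for illustration; mutex_lock_nested() and SINGLE_DEPTH_NESTING are the real kernel API):

	struct object {
		struct mutex lock;
	};

	static void transfer(struct object *a, struct object *b)
	{
		/*
		 * Both mutexes belong to the same lock class; without
		 * the nested annotation on the second acquisition,
		 * lockdep would report a possible recursive deadlock.
		 */
		mutex_lock(&a->lock);
		mutex_lock_nested(&b->lock, SINGLE_DEPTH_NESTING);
		/* ... move state from a to b ... */
		mutex_unlock(&b->lock);
		mutex_unlock(&a->lock);
	}

Here the subclass is only used because the same thread holds two instances of the class at once, which is the point of contrast with the patch above.]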
Hmm, yeah, that's a reasonable complaint. My mental model is more along
the lines of
"this is a different nested (layered) object instance"
rather than
"I'm nesting the locks",
so this code change fits well into my model. However, code like
infiniband/core/cma.c does in fact use it with nested object instances
*and* actually (and directly in the code) nested locks.
I think the "actual nesting" could possibly come up with workqueues,
like I described earlier: you have two objects with a workqueue each,
and the objects are somehow layered (like cma.c's listen_id and
conn_id), and each contains a workqueue, and some work running on the
listen_id's workqueue wants to destroy all the conn_id's workqueue(s).
In that case, you do end up really having the nesting.
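[A sketch of that scenario, with invented names modeled on cma.c's listen_id/conn_id layering; flush_workqueue_nested() is the API proposed in this patch series, not an existing upstream function:

	struct conn {
		struct workqueue_struct *wq;
		struct list_head node;
	};

	struct listener {
		struct workqueue_struct *wq;
		struct list_head conns;
		struct work_struct teardown;
	};

	static void listener_teardown(struct work_struct *work)
	{
		struct listener *l = container_of(work, struct listener,
						  teardown);
		struct conn *c;

		/*
		 * This runs on l->wq. Flushing each c->wq from here
		 * nests one workqueue's lockdep_map inside another of
		 * the same class, so the flush must be annotated with
		 * a subclass to avoid a false-positive report.
		 */
		list_for_each_entry(c, &l->conns, node)
			flush_workqueue_nested(c->wq, SINGLE_DEPTH_NESTING);
	}

In this shape the nesting is real, matching Bart's model: the flush acquires the child workqueue's lockdep_map while work on the parent workqueue (same class) is in flight.]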
So in that sense, I still think this API may be required, and
(perhaps with an appropriate comment) could be used in the dio case even
if there's no actual nesting, without introducing the - IMHO single-use
and more dangerous - destroy_workqueue_no_drain().
johannes