Message-ID: <20170531181958.GA4795@htj.duckdns.org>
Date: Wed, 31 May 2017 14:19:58 -0400
From: Tejun Heo <tj@...nel.org>
To: Johannes Berg <johannes@...solutions.net>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: single-threaded wq lockdep is broken
Hello, Johannes.
On Sun, May 28, 2017 at 09:33:13PM +0200, Johannes Berg wrote:
> I suspect this is a long-standing bug introduced by all the pool rework
> you did at some point, but I don't really know nor can I figure out how
> to fix it right now. I guess it could possibly also be a lockdep issue,
> or an issue in how it's used, but I definitely know that this used to
> work (i.e. warn) back when I introduced the lockdep checking to the WQ
Hah, didn't know this worked.
> code. I was actually bitten by a bug like this, and erroneously
> dismissed it as not being the case because lockdep hadn't warned (and
> the actual deadlock debug output is basically nonexistent).
>
> In any case, the following code really should result in a warning from
> lockdep, but doesn't. If you uncomment the #define DEADLOCK, it will
> actually cause a deadlock :)
...
> static void w1_wk(struct work_struct *w)
> {
> mutex_lock(&mtx);
> msleep(100);
> mutex_unlock(&mtx);
> }
...
> static int init(void)
> {
> wq = create_singlethread_workqueue("test");
> if (!wq)
> return -ENOMEM;
> INIT_WORK(&w1, w1_wk);
> INIT_WORK(&w2, w2_wk);
>
> #ifdef DEADLOCK
> queue_work(wq, &w1);
> queue_work(wq, &w2);
> #endif
> mutex_lock(&mtx);
> flush_work(&w2);
> mutex_unlock(&mtx);
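(For anyone trying to reproduce this: the elided pieces of the test
module are presumably something like the following. This is a guess at
the cut parts, with w2_wk assumed to be a no-op, since its body doesn't
matter for the deadlock, only its position behind w1 on the
single-threaded queue.)

	#include <linux/module.h>
	#include <linux/workqueue.h>
	#include <linux/mutex.h>
	#include <linux/delay.h>

	static DEFINE_MUTEX(mtx);
	static struct workqueue_struct *wq;
	static struct work_struct w1, w2;

	/* assumed no-op: w2 only needs to sit behind w1 on the queue */
	static void w2_wk(struct work_struct *w)
	{
	}

With DEADLOCK defined, w1 runs first on the single worker thread and
blocks on mtx, which init() holds; w2 can never start, so
flush_work(&w2) never returns and init() never releases mtx.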
So, it used to always create a dependency between work items on
singlethread workqueues according to their queueing order? It
shouldn't be difficult to fix. I'll dig through the history and see
what happened.
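The direction I'd expect a fix to take: on a singlethread/ordered
workqueue, flushing one work item is effectively as strong as flushing
the whole queue, so flush_work() should record a dependency on the
wq-level lockdep map, not just the work item's own map. Roughly (an
untested sketch using the current pwq naming, not a patch):

	/* in start_flush_work(), after resolving the pwq the work ran on */

	/*
	 * On a singlethread (max_active == 1) or rescuer-equipped wq,
	 * every earlier work item implicitly blocks this one, so record
	 * a dependency on the whole workqueue, not just this work item.
	 */
	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)
		lock_map_acquire(&pwq->wq->lockdep_map);
	else
		lock_map_acquire_read(&pwq->wq->lockdep_map);
	lock_map_release(&pwq->wq->lockdep_map);

With something like that in place, your test gives lockdep the cycle
it needs: init() holds mtx while flush_work() acquires the wq map
(mtx -> wq), and the worker acquires the wq map before w1_wk() takes
mtx (wq -> mtx).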
Thanks.
--
tejun