Message-ID: <Yha1LeX4OK3cLCV5@slm.duckdns.org>
Date: Wed, 23 Feb 2022 12:29:01 -1000
From: Tejun Heo <tj@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: 0day robot <lkp@...el.com>, LKML <linux-kernel@...r.kernel.org>,
lkp@...ts.01.org, kernel test robot <oliver.sang@...el.com>
Subject: Re: [PATCH] workqueue: Use private WQ for schedule_on_each_cpu() API
On Thu, Feb 24, 2022 at 07:26:30AM +0900, Tetsuo Handa wrote:
> > The patch seems pretty wrong. What's problematic is system workqueue flushes
> > (which flushes the entire workqueue), not work item flushes.
>
> Why? My understanding is that
>
> flushing a workqueue waits for completion of all work items in that workqueue
>
> flushing a work item waits for completion of that work item on the
> workqueue specified at queue_work() time
>
> and
>
> if a work item in some workqueue is blocked by other work in that workqueue
> (e.g. max_active limit, work items on that workqueue and locks they need),
> it has a risk of deadlock
>
> . Then, how can flushing a work item using system-wide workqueues be free of deadlock risk?
> Isn't it just "unlikely to deadlock" rather than "impossible to deadlock"?
If we're jamming system_wq with a combination of work items which need more
than max_active to make forward progress, we're stuck regardless of flushes.
What's needed at that point is increasing max_active (or something along
that line).
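
[Editorial note: the two flush flavors being distinguished above can be
sketched as follows. This is an illustrative kernel-style fragment, not
compilable standalone; my_work, my_work_fn and example() are hypothetical
names, while queue_work(), flush_work() and flush_workqueue() are the
real workqueue API calls under discussion.]

```c
static void my_work_fn(struct work_struct *work)
{
	/* ... do some deferred work ... */
}
static DECLARE_WORK(my_work, my_work_fn);

static void example(void)
{
	queue_work(system_wq, &my_work);

	/*
	 * Waits only for my_work itself to finish executing.
	 * It does not wait for unrelated items queued on system_wq.
	 */
	flush_work(&my_work);

	/*
	 * By contrast, this waits for every work item currently on
	 * system_wq, including items queued by other subsystems --
	 * the system-wide flush identified above as the problematic
	 * operation.
	 */
	flush_workqueue(system_wq);
}
```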
Thanks.
--
tejun