Message-ID: <20200528080844.37wgxcy77uu7pmmz@linutronix.de>
Date: Thu, 28 May 2020 10:08:44 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Lai Jiangshan <laijs@...ux.alibaba.com>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [PATCH 1/2] workqueue: pin the pool while it is managing
On 2020-05-28 03:06:55 [+0000], Lai Jiangshan wrote:
> So that put_unbound_pool() can ensure that all workers are idle and
> no manager is left unfinished. It doesn't need to wait for any manager
> and can go ahead and delete all the idle workers straight away.
>
> Also remove the manager waitqueue, because it is no longer needed; as
> Sebastian Andrzej Siewior said:
>
> The workqueue code has its internal spinlock (pool::lock) and also
> implicit spinlock usage in the wq_manager waitqueue. These spinlocks
> are converted to 'sleeping' spinlocks on a RT-kernel.
>
> Workqueue functions can be invoked from contexts which are truly atomic
> even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
> contexts is forbidden.
>
> pool::lock can be converted to a raw spinlock as the lock held times
> are short. But the workqueue manager waitqueue is handled inside of
> pool::lock held regions which again violates the lock nesting rules
> of raw and regular spinlocks.
This seems to work for the test case I had, and lockdep didn't
complain, so…
If you prefer this over my 1/2, what do we do about 2/2? Do you want me
to repost it?
Sebastian