Message-ID: <20200527152004.z3n5ccajp6stdcno@linutronix.de>
Date: Wed, 27 May 2020 17:20:04 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Tejun Heo <tj@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 0/3] workqueue: Make the workqueue code PREEMPT_RT safe
On 2020-05-26 14:46:59 [-0700], Linus Torvalds wrote:
> On Tue, May 26, 2020 at 2:41 PM Linus Torvalds
> <torvalds@...ux-foundation.org> wrote:
> >
> > Almost all users of swait have historically been buggy and/or
> > pointless.
>
> Yeah, looking at this one, it really seems to fundamentally fall in
> the "pointless" category.
>
> So yeah - instead of extending swait with a new primitive that nobody
> other than this pointless case wants, just don't use swait at all.
>
> We have better models. We have "rcuwait", and we have
> "wake_up_process()". Either of which is simpler and more efficient
> than swait, and are actually useful. rcuwait isn't exactly widely
> used, but it has very nice semantics for when that is what you want.
> And wake_up_process() is both simple and straightforward, particularly
> when you already have a spinlock for protecting whatever state it is
> you're waking up on or waiting for.
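For reference, the bare wake_up_process() pattern Linus describes would
look roughly like this. This is a minimal sketch with made-up names
(state_lock, done, waiter), not code from workqueue.c: the waiter
publishes its task_struct and sets its task state under the same
spinlock the waker takes to change the state, so no wakeup can be lost.

static DEFINE_SPINLOCK(state_lock);
static struct task_struct *waiter;	/* protected by state_lock */
static bool done;			/* protected by state_lock */

static void wait_for_done(void)
{
	spin_lock_irq(&state_lock);
	waiter = current;
	while (!done) {
		/*
		 * The state is set under the lock: the waker must hold
		 * the same lock to set @done, and wake_up_process()
		 * puts us back to TASK_RUNNING if it fires between the
		 * unlock and schedule(), so the wakeup cannot be lost.
		 */
		set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock_irq(&state_lock);
		schedule();
		spin_lock_irq(&state_lock);
	}
	waiter = NULL;
	spin_unlock_irq(&state_lock);
}

static void signal_done(void)
{
	spin_lock_irq(&state_lock);
	done = true;
	if (waiter)
		wake_up_process(waiter);
	spin_unlock_irq(&state_lock);
}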
Using rcuwait, the conversion would look like this:
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 891ccad5f2716..3259f52bfe765 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -301,7 +301,7 @@ static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
 static DEFINE_MUTEX(wq_pool_attach_mutex); /* protects worker attach/detach */
 static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
-static DECLARE_WAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */
+static struct rcuwait manager_wait;	/* wait for manager to go away */
 
 static LIST_HEAD(workqueues);		/* PR: list of all workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
@@ -2140,7 +2140,7 @@ static bool manage_workers(struct worker *worker)
 
 	pool->manager = NULL;
 	pool->flags &= ~POOL_MANAGER_ACTIVE;
-	wake_up(&wq_manager_wait);
+	rcuwait_wake_up(&manager_wait);
 	return true;
 }
 
@@ -3504,6 +3504,17 @@ static void rcu_free_pool(struct rcu_head *rcu)
 	kfree(pool);
 }
 
+static bool wq_manager_inactive(struct worker_pool *pool)
+{
+	spin_lock_irq(&pool->lock);
+
+	if (pool->flags & POOL_MANAGER_ACTIVE) {
+		spin_unlock_irq(&pool->lock);
+		return false;
+	}
+	return true;
+}
+
 /**
  * put_unbound_pool - put a worker_pool
  * @pool: worker_pool to put
@@ -3540,9 +3551,8 @@ static void put_unbound_pool(struct worker_pool *pool)
 	 * @pool's workers from blocking on attach_mutex.  We're the last
	 * manager and @pool gets freed with the flag set.
	 */
-	spin_lock_irq(&pool->lock);
-	wait_event_lock_irq(wq_manager_wait,
-			    !(pool->flags & POOL_MANAGER_ACTIVE), pool->lock);
+	rcuwait_wait_event(&manager_wait, wq_manager_inactive(pool),
+			   TASK_UNINTERRUPTIBLE);
 	pool->flags |= POOL_MANAGER_ACTIVE;
 
 	while ((worker = first_idle_worker(pool)))
There is the unbalanced lock/unlock in wq_manager_inactive() which
sparse complains about: on success the function returns with pool->lock
still held, so that put_unbound_pool() sets POOL_MANAGER_ACTIVE under
the lock, just as the old wait_event_lock_irq() code did. Other than
that it looks straightforward.
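The unbalance is by design rather than an accident: rcuwait_wait_event()
re-evaluates the condition expression on every loop iteration and bails
out as soon as it yields true, so whatever the final evaluation of
wq_manager_inactive() did (take pool->lock and keep it) is still in
effect when the macro returns. Roughly, simplified from
include/linux/rcuwait.h with the return value and the
signal_pending_state() handling elided:

#define rcuwait_wait_event(w, condition, state)				\
({									\
	rcu_assign_pointer((w)->task, current);				\
	for (;;) {							\
		set_current_state(state);				\
		/* The condition is evaluated after the task state is	\
		 * set; once it returns true we break out immediately,	\
		 * so a lock it acquired is still held on return.	\
		 */							\
		if (condition)						\
			break;						\
		schedule();						\
	}								\
	rcu_assign_pointer((w)->task, NULL);				\
	__set_current_state(TASK_RUNNING);				\
})

(The real macro can also return -EINTR on a pending signal, which
TASK_UNINTERRUPTIBLE rules out here.)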
> Linus
Sebastian