Date:   Thu, 28 May 2020 03:06:55 +0000
From:   Lai Jiangshan <laijs@...ux.alibaba.com>
To:     linux-kernel@...r.kernel.org
Cc:     Lai Jiangshan <laijs@...ux.alibaba.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Tejun Heo <tj@...nel.org>,
        Lai Jiangshan <jiangshanlai@...il.com>
Subject: [PATCH 1/2] workqueue: pin the pool while it is managing

Pin the pool while it is managing, so that put_unbound_pool() can be
sure that all workers are idle and that no manager is still running.
put_unbound_pool() then does not need to wait for any manager and can
delete all the idle workers straight away.
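
To see why this makes put_unbound_pool() safe, here is a minimal
userspace model of the idea (struct pool, pool_get() and pool_put()
are illustrative stand-ins, not the workqueue API; the real patch pins
the pool indirectly by taking a pwq reference with get_pwq()):

#include <stdatomic.h>
#include <stdlib.h>

struct pool {
        atomic_int ref;                 /* freed when this drops to zero */
};

static void pool_get(struct pool *p)
{
        atomic_fetch_add(&p->ref, 1);
}

static void pool_put(struct pool *p)
{
        if (atomic_fetch_sub(&p->ref, 1) == 1) {
                /* last ref: no manager can still be running */
                free(p);
        }
}

static void manage(struct pool *p)
{
        pool_get(p);                    /* pin across the whole manage step */
        /* ... adjust the worker population ... */
        pool_put(p);
}

int main(void)
{
        struct pool *p = malloc(sizeof(*p));

        atomic_init(&p->ref, 1);        /* base reference */
        manage(p);                      /* safe: pool pinned while managing */
        pool_put(p);                    /* drop base ref; free happens here */
        return 0;
}

Because the manager holds a reference for the whole manage step, the
final put (and thus the free) can never race with a running manager.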

Also remove the manager waitqueue: it is no longer needed, and as
Sebastian Andrzej Siewior explained:

  The workqueue code has its internal spinlock (pool::lock) and also
  implicit spinlock usage in the wq_manager waitqueue. These spinlocks
  are converted to 'sleeping' spinlocks on a RT-kernel.

  Workqueue functions can be invoked from contexts which are truly atomic
  even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
  contexts is forbidden.

  pool::lock can be converted to a raw spinlock as the lock held times
  are short. But the workqueue manager waitqueue is handled inside of
  pool::lock held regions which again violates the lock nesting rules
  of raw and regular spinlocks.
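
To make the constraint concrete, here is a schematic kernel-style
fragment (illustrative only, not code from this patch) showing the
nesting that PREEMPT_RT forbids:

#include <linux/spinlock.h>
#include <linux/wait.h>

static DEFINE_RAW_SPINLOCK(raw_lock);      /* spins even on PREEMPT_RT */
static DECLARE_WAIT_QUEUE_HEAD(waiters);   /* internal spinlock_t sleeps on RT */

static void broken_on_rt(void)
{
        raw_spin_lock(&raw_lock);
        /*
         * wake_up() takes waiters.lock, a spinlock_t.  On PREEMPT_RT
         * that is a sleeping lock, so acquiring it inside a raw
         * spinlock region violates the nesting rules quoted above.
         */
        wake_up(&waiters);
        raw_spin_unlock(&raw_lock);
}

This is exactly the wq_manager_wait / pool::lock situation: once
pool::lock becomes a raw spinlock, the wake_up() on the manager
waitqueue inside that region has to go, which the refcount-based
pinning above makes possible.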

Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Link: https://lore.kernel.org/lkml/20200527194633.1660952-2-bigeasy@linutronix.de
Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
 kernel/workqueue.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 891ccad5f271..fde10a5dba82 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -301,7 +301,6 @@ static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
 static DEFINE_MUTEX(wq_pool_attach_mutex); /* protects worker attach/detach */
 static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
-static DECLARE_WAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */
 
 static LIST_HEAD(workqueues);		/* PR: list of all workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
@@ -2129,10 +2128,21 @@ __acquires(&pool->lock)
 static bool manage_workers(struct worker *worker)
 {
 	struct worker_pool *pool = worker->pool;
+	struct work_struct *work = list_first_entry(&pool->worklist,
+					struct work_struct, entry);
+	struct pool_workqueue *pwq = get_work_pwq(work);
 
 	if (pool->flags & POOL_MANAGER_ACTIVE)
 		return false;
 
+	/*
+	 * If @pwq is for an unbound wq, its base ref and the @pool's
+	 * ref may be put at any time due to an attribute change.
+	 * Pin @pwq to also pin the @pool until the manager is done
+	 * with it.
+	 */
+	get_pwq(pwq);
+
 	pool->flags |= POOL_MANAGER_ACTIVE;
 	pool->manager = worker;
 
@@ -2140,7 +2150,7 @@ static bool manage_workers(struct worker *worker)
 
 	pool->manager = NULL;
 	pool->flags &= ~POOL_MANAGER_ACTIVE;
-	wake_up(&wq_manager_wait);
+	put_pwq(pwq);
 	return true;
 }
 
@@ -3535,16 +3545,7 @@ static void put_unbound_pool(struct worker_pool *pool)
 		idr_remove(&worker_pool_idr, pool->id);
 	hash_del(&pool->hash_node);
 
-	/*
-	 * Become the manager and destroy all workers.  This prevents
-	 * @pool's workers from blocking on attach_mutex.  We're the last
-	 * manager and @pool gets freed with the flag set.
-	 */
 	spin_lock_irq(&pool->lock);
-	wait_event_lock_irq(wq_manager_wait,
-			    !(pool->flags & POOL_MANAGER_ACTIVE), pool->lock);
-	pool->flags |= POOL_MANAGER_ACTIVE;
-
 	while ((worker = first_idle_worker(pool)))
 		destroy_worker(worker);
 	WARN_ON(pool->nr_workers || pool->nr_idle);
-- 
2.20.1
