Message-Id: <20190829173038.21040-4-daniel.m.jordan@oracle.com>
Date: Thu, 29 Aug 2019 13:30:32 -0400
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Herbert Xu <herbert@...dor.apana.org.au>,
Steffen Klassert <steffen.klassert@...unet.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>, linux-crypto@...r.kernel.org,
linux-kernel@...r.kernel.org,
Daniel Jordan <daniel.m.jordan@...cle.com>
Subject: [PATCH v2 3/9] workqueue: require CPU hotplug read exclusion for apply_workqueue_attrs

Change the calling convention for apply_workqueue_attrs() to require CPU
hotplug read exclusion.

This avoids lockdep complaints about nested calls to get_online_cpus() in
a future patch, where padata calls apply_workqueue_attrs() with the CPU
read lock already held while changing other CPU-hotplug-sensitive data
structures.
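
For example (a minimal sketch mirroring the alloc_and_link_pwqs() change
below; "wq" and "attrs" stand in for any unbound workqueue and its attrs),
a caller now takes the CPU read lock itself around the call:

	get_online_cpus();	/* CPU hotplug read exclusion */
	ret = apply_workqueue_attrs(wq, attrs);
	put_online_cpus();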
Signed-off-by: Daniel Jordan <daniel.m.jordan@...cle.com>
Acked-by: Tejun Heo <tj@...nel.org>
Acked-by: Steffen Klassert <steffen.klassert@...unet.com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Lai Jiangshan <jiangshanlai@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: linux-crypto@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
---
kernel/workqueue.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
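
For context, apply_wqattrs_lock()/apply_wqattrs_unlock() take the CPU
hotplug read lock and wq_pool_mutex together; the patch keeps only the
mutex and asserts the hotplug lock instead. A sketch of the existing
helpers (paraphrased from kernel/workqueue.c, not part of this diff):

	/* paraphrased sketch of the current helpers */
	static void apply_wqattrs_lock(void)
	{
		/* CPUs should stay stable across pwq creations */
		get_online_cpus();
		mutex_lock(&wq_pool_mutex);
	}

	static void apply_wqattrs_unlock(void)
	{
		mutex_unlock(&wq_pool_mutex);
		put_online_cpus();
	}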
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f53705ff3ff1..bc2e09a8ea61 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4030,6 +4030,8 @@ static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
  *
  * Performs GFP_KERNEL allocations.
  *
+ * Assumes caller has CPU hotplug read exclusion, i.e. get_online_cpus().
+ *
  * Return: 0 on success and -errno on failure.
  */
 int apply_workqueue_attrs(struct workqueue_struct *wq,
@@ -4037,9 +4039,11 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
 {
 	int ret;
 
-	apply_wqattrs_lock();
+	lockdep_assert_cpus_held();
+
+	mutex_lock(&wq_pool_mutex);
 	ret = apply_workqueue_attrs_locked(wq, attrs);
-	apply_wqattrs_unlock();
+	mutex_unlock(&wq_pool_mutex);
 
 	return ret;
 }
@@ -4152,16 +4156,21 @@ static int alloc_and_link_pwqs(struct workqueue_struct *wq)
 			mutex_unlock(&wq->mutex);
 		}
 		return 0;
-	} else if (wq->flags & __WQ_ORDERED) {
+	}
+
+	get_online_cpus();
+	if (wq->flags & __WQ_ORDERED) {
 		ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
 		/* there should only be single pwq for ordering guarantee */
 		WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node ||
 			      wq->pwqs.prev != &wq->dfl_pwq->pwqs_node),
 		     "ordering guarantee broken for workqueue %s\n", wq->name);
-		return ret;
 	} else {
-		return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
+		ret = apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
 	}
+	put_online_cpus();
+
+	return ret;
 }
 
 static int wq_clamp_max_active(int max_active, unsigned int flags,
--
2.23.0