Message-Id: <e26e5199ebc5055b4f8ba6225bfe26cd05f4e0b6.1402303821.git.jslaby@suse.cz>
Date: Mon, 9 Jun 2014 10:50:37 +0200
From: Jiri Slaby <jslaby@...e.cz>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
Daeseok Youn <daeseok.youn@...il.com>,
Tejun Heo <tj@...nel.org>, Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 102/146] workqueue: fix bugs in wq_update_unbound_numa() failure path
From: Daeseok Youn <daeseok.youn@...il.com>
3.12-stable review patch. If anyone has any objections, please let me know.
===============
commit 77f300b198f93328c26191b52655ce1b62e202cf upstream.
The wq_update_unbound_numa() failure path has the following two bugs.

- alloc_unbound_pwq() is called without holding wq->mutex; however, if
  the allocation fails, it jumps to out_unlock, which tries to unlock
  wq->mutex.

- The function should switch to dfl_pwq on failure, but does not do so
  after an alloc_unbound_pwq() failure.
Fix it by regrabbing wq->mutex and jumping to use_dfl_pwq on
alloc_unbound_pwq() failure.
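
For readers outside kernel/workqueue.c, here is a minimal stand-alone
C sketch of the locking pattern this change fixes. pthreads stands in
for wq->mutex, and all names (update_resource, use_default, fail_alloc)
are hypothetical simplifications, not workqueue identifiers.

/*
 * Sketch (not kernel code): an allocation performed without the lock
 * held, where the failure path must take the lock before jumping to
 * the shared fallback/unlock labels.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void update_resource(int fail_alloc)
{
	/* The allocation happens while the lock is NOT held. */
	void *res = fail_alloc ? NULL : malloc(16);

	if (!res) {
		/*
		 * Buggy variant: "goto out_unlock;" here would unlock
		 * a mutex this path never took. The fix: grab the lock
		 * first, then fall back to the default resource.
		 */
		pthread_mutex_lock(&lock);
		goto use_default;
	}

	pthread_mutex_lock(&lock);
	printf("installed new resource\n");
	free(res);
	goto out_unlock;

use_default:
	printf("fell back to default resource\n");
out_unlock:
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	update_resource(0);	/* success: install the new resource */
	update_resource(1);	/* failure: regrab lock, use the default */
	return 0;
}

This mirrors the patch below: the fix is not to avoid the failure but
to re-establish the invariant (mutex held, a usable pwq selected) that
the shared exit labels assume.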
Signed-off-by: Daeseok Youn <daeseok.youn@...il.com>
Acked-by: Lai Jiangshan <laijs@...fujitsu.com>
Signed-off-by: Tejun Heo <tj@...nel.org>
Fixes: 4c16bd327c74 ("workqueue: implement NUMA affinity for unbound workqueues")
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
---
kernel/workqueue.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 60fee69c37be..9ae0693ca520 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4090,7 +4090,8 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
 	if (!pwq) {
 		pr_warning("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
 			   wq->name);
-		goto out_unlock;
+		mutex_lock(&wq->mutex);
+		goto use_dfl_pwq;
 	}
 
 	/*
--
1.9.3