Date:	Thu, 5 Sep 2013 13:10:44 +0800
From:	Libin <huawei.libin@...wei.com>
To:	<tj@...nel.org>
CC:	<linux-kernel@...r.kernel.org>, <wangyijing@...wei.com>,
	<guohanjun@...wei.com>, <wujianguo@...wei.com>
Subject: [PATCH] workqueue: fix ordered workqueue on multi-NUMA-node platforms

On platforms with multiple NUMA nodes, a workqueue created with
alloc_ordered_workqueue() provides no ordering guarantee: works queued
from CPUs on different nodes are placed on different pool_workqueues.

Add an ordered_pwq member to struct workqueue_struct that records the
first pwq chosen for the workqueue, and make every subsequent enqueue
reuse it, so that queueing work from different NUMA nodes no longer
breaks the ordering guarantee.

Signed-off-by: Libin <huawei.libin@...wei.com>
---
 kernel/workqueue.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 29b7985..42c6c29 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -249,6 +249,7 @@ struct workqueue_struct {
 
 	struct workqueue_attrs	*unbound_attrs;	/* WQ: only for unbound wqs */
 	struct pool_workqueue	*dfl_pwq;	/* WQ: only for unbound wqs */
+	struct pool_workqueue	*ordered_pwq;	/* WQ: only for ordered wqs */
 
 #ifdef CONFIG_SYSFS
 	struct wq_device	*wq_dev;	/* I: for sysfs interface */
@@ -1326,10 +1327,19 @@ retry:
 
 	/* pwq which will be used unless @work is executing elsewhere */
 	if (!(wq->flags & WQ_UNBOUND))
-		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
-	else
-		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
+		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
+	else {
+		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
+		if (wq->flags & __WQ_ORDERED) {
+			mutex_lock(&wq->mutex);
+			if (!wq->ordered_pwq)
+				wq->ordered_pwq = pwq;
+			else
+				pwq = wq->ordered_pwq;
+			mutex_unlock(&wq->mutex);
+		}
+	}
 
 	/*
 	 * If @work was previously on a different pool, it might still be
 	 * running there, in which case the work needs to be queued on that
-- 
1.8.2.1

