Message-ID: <20140213003311.GJ4250@linux.vnet.ibm.com>
Date:	Wed, 12 Feb 2014 16:33:11 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Tejun Heo <tj@...nel.org>, Kevin Hilman <khilman@...aro.org>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Zoran Markovic <zoran.markovic@...aro.org>,
	linux-kernel@...r.kernel.org,
	Shaibal Dutta <shaibal.dutta@...adcom.com>,
	Dipankar Sarma <dipankar@...ibm.com>
Subject: Re: [RFC PATCH] rcu: move SRCU grace period work to power efficient
 workqueue

On Thu, Feb 13, 2014 at 12:04:57AM +0100, Frederic Weisbecker wrote:
> On Wed, Feb 12, 2014 at 11:59:22AM -0800, Paul E. McKenney wrote:
> > On Wed, Feb 12, 2014 at 02:23:54PM -0500, Tejun Heo wrote:
> > > Hello,
> > > 
> > > On Wed, Feb 12, 2014 at 11:02:41AM -0800, Paul E. McKenney wrote:
> > > > +2.	Use the /sys/devices/virtual/workqueue/*/cpumask sysfs files
> > > > +	to force the WQ_SYSFS workqueues to run on the specified set
> > > > +	of CPUs.  The set of WQ_SYSFS workqueues can be displayed using
> > > > +	"ls /sys/devices/virtual/workqueue".
> > > 
> > > One thing to be careful about is that once published, it becomes part
> > > of userland visible interface.  Maybe adding some words warning
> > > against sprinkling WQ_SYSFS willy-nilly is a good idea?
> > 
> > Good point!  How about the following?
> > 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> > Documentation/kernel-per-CPU-kthreads.txt: Workqueue affinity
> > 
> > This commit documents the ability to apply CPU affinity to WQ_SYSFS
> > workqueues, thus offloading their work from the CPUs that the user
> > wishes to keep free of OS jitter.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > Cc: Frederic Weisbecker <fweisbec@...il.com>
> > Cc: Tejun Heo <tj@...nel.org>
> > 
> > diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
> > index 827104fb9364..214da3a47a68 100644
> > --- a/Documentation/kernel-per-CPU-kthreads.txt
> > +++ b/Documentation/kernel-per-CPU-kthreads.txt
> > @@ -162,7 +162,16 @@ Purpose: Execute workqueue requests
> >  To reduce its OS jitter, do any of the following:
> >  1.	Run your workload at a real-time priority, which will allow
> >  	preempting the kworker daemons.
> > -2.	Do any of the following needed to avoid jitter that your
> > +2.	Use the /sys/devices/virtual/workqueue/*/cpumask sysfs files
> > +	to force the WQ_SYSFS workqueues to run on the specified set
> > +	of CPUs.  The set of WQ_SYSFS workqueues can be displayed using
> > +	"ls /sys/devices/virtual/workqueue".  That said, the workqueues
> > +	maintainer would like to caution people against indiscriminately
> > +	sprinkling WQ_SYSFS across all the workqueues.  The reason for
> > +	caution is that it is easy to add WQ_SYSFS, but because sysfs
> > +	is part of the formal user/kernel API, it can be nearly impossible
> > +	to remove it, even if its addition was a mistake.
> > +3.	Do any of the following needed to avoid jitter that your
> 
> Acked-by: Frederic Weisbecker <fweisbec@...il.com>
> 
> I just suggest we append a small explanation about what WQ_SYSFS is about.
> Like:

Fair point!  I wordsmithed it into the following.  Seem reasonable?

							Thanx, Paul

------------------------------------------------------------------------

Documentation/kernel-per-CPU-kthreads.txt: Workqueue affinity

This commit documents the ability to apply CPU affinity to WQ_SYSFS
workqueues, thus offloading their work from the CPUs that the user
wishes to keep free of OS jitter.

Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Reviewed-by: Tejun Heo <tj@...nel.org>
Acked-by: Frederic Weisbecker <fweisbec@...il.com>

diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
index 827104fb9364..f3cd299fcc41 100644
--- a/Documentation/kernel-per-CPU-kthreads.txt
+++ b/Documentation/kernel-per-CPU-kthreads.txt
@@ -162,7 +162,18 @@ Purpose: Execute workqueue requests
 To reduce its OS jitter, do any of the following:
 1.	Run your workload at a real-time priority, which will allow
 	preempting the kworker daemons.
-2.	Do any of the following needed to avoid jitter that your
+2.	A given workqueue can be made visible in the sysfs filesystem
+	by passing the WQ_SYSFS flag to that workqueue's alloc_workqueue().
+	Such a workqueue can be confined to a given subset of the
+	CPUs using the /sys/devices/virtual/workqueue/*/cpumask sysfs
+	files.	The set of WQ_SYSFS workqueues can be displayed using
+	"ls /sys/devices/virtual/workqueue".  That said, the workqueues
+	maintainer would like to caution people against indiscriminately
+	sprinkling WQ_SYSFS across all the workqueues.	The reason for
+	caution is that it is easy to add WQ_SYSFS, but because sysfs is
+	part of the formal user/kernel API, it can be nearly impossible
+	to remove it, even if its addition was a mistake.
+3.	Do any of the following needed to avoid jitter that your
 	application cannot tolerate:
 	a.	Build your kernel with CONFIG_SLUB=y rather than
 		CONFIG_SLAB=y, thus avoiding the slab allocator's periodic

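For concreteness, the sysfs knobs described above can be exercised from a
shell roughly as follows.  This is only a sketch: which WQ_SYSFS workqueues
exist depends on the kernel, the "writeback" name below is just an example,
and writing a cpumask file requires root.

```shell
# List the WQ_SYSFS workqueues exported by this kernel (contents vary):
#   ls /sys/devices/virtual/workqueue

# Build a cpumask confining a workqueue to CPUs 0-3: set the low four
# bits, then render the mask in the hex format the cpumask file expects.
mask=$(printf '%x' $(( (1 << 4) - 1 )))
echo "$mask"    # -> f

# Apply it (requires root; "writeback" is only an example name):
#   echo "$mask" > /sys/devices/virtual/workqueue/writeback/cpumask
```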
