Date:	Sat, 05 May 2012 00:49:58 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	a.p.zijlstra@...llo.nl, mingo@...nel.org, pjt@...gle.com,
	paul@...lmenage.org, akpm@...ux-foundation.org
Cc:	rjw@...k.pl, nacc@...ibm.com, paulmck@...ux.vnet.ibm.com,
	tglx@...utronix.de, seto.hidetoshi@...fujitsu.com, rob@...dley.net,
	tj@...nel.org, mschmidt@...hat.com, berrange@...hat.com,
	nikunj@...ux.vnet.ibm.com, vatsa@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-pm@...r.kernel.org, srivatsa.bhat@...ux.vnet.ibm.com
Subject: [PATCH v2 5/7] Docs, cpusets: Update the cpuset documentation

Add documentation for the newly introduced cpuset.actual_cpus file and
describe the new semantics for updating cpusets upon CPU hotplug.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@...ux.vnet.ibm.com>
Cc: stable@...r.kernel.org
---

 Documentation/cgroups/cpusets.txt |   43 +++++++++++++++++++++++++------------
 1 files changed, 29 insertions(+), 14 deletions(-)

diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index cefd3d8..374b9d2 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -168,7 +168,12 @@ Each cpuset is represented by a directory in the cgroup file system
 containing (on top of the standard cgroup files) the following
 files describing that cpuset:
 
- - cpuset.cpus: list of CPUs in that cpuset
+ - cpuset.cpus: list of CPUs in that cpuset, as set by the user;
+		the kernel will not alter this upon CPU hotplug;
+		this file has read/write permissions
+ - cpuset.actual_cpus: list of CPUs actually available for the tasks in the
+                       cpuset; the kernel can change this in the event of
+                       CPU hotplug; this file is read-only
  - cpuset.mems: list of Memory Nodes in that cpuset
  - cpuset.memory_migrate flag: if set, move pages to cpusets nodes
  - cpuset.cpu_exclusive flag: is cpu placement exclusive?
@@ -640,16 +645,25 @@ prior 'cpuset.mems' setting, will not be moved.
 
 There is an exception to the above.  If hotplug functionality is used
 to remove all the CPUs that are currently assigned to a cpuset,
-then all the tasks in that cpuset will be moved to the nearest ancestor
-with non-empty cpus.  But the moving of some (or all) tasks might fail if
-cpuset is bound with another cgroup subsystem which has some restrictions
-on task attaching.  In this failing case, those tasks will stay
-in the original cpuset, and the kernel will automatically update
-their cpus_allowed to allow all online CPUs.  When memory hotplug
-functionality for removing Memory Nodes is available, a similar exception
-is expected to apply there as well.  In general, the kernel prefers to
-violate cpuset placement, over starving a task that has had all
-its allowed CPUs or Memory Nodes taken offline.
+then the cpuset hierarchy is traversed, searching for the nearest
+ancestor whose cpu mask contains at least one online cpu.  The tasks in
+the empty cpuset are then run on the cpus specified in that ancestor's
+cpu mask.  Note that during CPU hotplug operations, the tasks in a cpuset
+are not moved from one cpuset to another; only the cpu mask of that
+cpuset is updated to ensure that it has at least one online cpu, by
+making it closely resemble the cpu mask of the nearest non-empty ancestor
+that has online cpus.
+
+When memory hotplug functionality for removing Memory Nodes is available,
+if all the memory nodes currently assigned to a cpuset are removed via
+hotplug, then all the tasks in that cpuset will be moved to the nearest
+ancestor with non-empty memory nodes.  But the moving of some (or all)
+tasks might fail if the cpuset is bound to another cgroup subsystem that
+places restrictions on task attaching.  In that case, those tasks will
+stay in the original cpuset, and the kernel will automatically update
+their mems_allowed to allow all online nodes.
+In general, the kernel prefers to violate cpuset placement over starving
+a task that has had all its allowed CPUs or Memory Nodes taken offline.
 
 There is a second exception to the above.  GFP_ATOMIC requests are
 kernel internal allocations that must be satisfied, immediately.
@@ -730,9 +744,10 @@ cgroup.event_control   cpuset.memory_spread_page
 cgroup.procs           cpuset.memory_spread_slab
 cpuset.cpu_exclusive   cpuset.mems
 cpuset.cpus            cpuset.sched_load_balance
-cpuset.mem_exclusive   cpuset.sched_relax_domain_level
-cpuset.mem_hardwall    notify_on_release
-cpuset.memory_migrate  tasks
+cpuset.actual_cpus     cpuset.sched_relax_domain_level
+cpuset.mem_exclusive   notify_on_release
+cpuset.mem_hardwall    tasks
+cpuset.memory_migrate
 
 Reading them will give you information about the state of this cpuset:
 the CPUs and Memory Nodes it can use, the processes that are using
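
For reviewers who want to try this out, here is a rough sketch of the
intended cpuset.cpus vs. cpuset.actual_cpus behavior.  It assumes the
cpuset hierarchy is mounted at /sys/fs/cgroup/cpuset (as in the examples
in cpusets.txt) and uses illustrative cpuset and CPU names; the values in
the comments follow from the semantics described above, they are not
output captured from a test run:

  cd /sys/fs/cgroup/cpuset
  mkdir demo
  echo 1-2 > demo/cpuset.cpus          # user's placement request
  echo 0   > demo/cpuset.mems          # a mems value is needed as well

  cat demo/cpuset.cpus                 # 1-2  (what the user wrote)
  cat demo/cpuset.actual_cpus          # 1-2  (both cpus are online)

  echo 0 > /sys/devices/system/cpu/cpu2/online   # take cpu 2 offline

  cat demo/cpuset.cpus                 # 1-2  (unchanged by the kernel)
  cat demo/cpuset.actual_cpus          # 1    (only the online subset)

  echo 1 > /sys/devices/system/cpu/cpu2/online   # bring cpu 2 back

  cat demo/cpuset.actual_cpus          # 1-2  (follows cpuset.cpus again)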

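A similar sketch of the fallback described above, when every cpu in a
cpuset goes offline (same assumptions as before, continuing from the
previous example):

  echo 2 > demo/cpuset.cpus            # cpuset now covers only cpu 2
  echo $$ > demo/tasks                 # attach this shell to it

  echo 0 > /sys/devices/system/cpu/cpu2/online   # demo has no online cpu left

  cat demo/tasks                       # the shell is still listed here:
                                       # tasks are not moved to another cpuset
  cat demo/cpuset.cpus                 # 2  (still exactly what the user wrote)
  cat demo/cpuset.actual_cpus          # the online cpus borrowed from the
                                       # nearest ancestor with online cpus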