Date:   Wed, 30 May 2018 16:18:04 +0200
From:   Juri Lelli <juri.lelli@...hat.com>
To:     Waiman Long <longman@...hat.com>
Cc:     Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
        kernel-team@...com, pjt@...gle.com, luto@...capital.net,
        Mike Galbraith <efault@....de>, torvalds@...ux-foundation.org,
        Roman Gushchin <guro@...com>,
        Patrick Bellasi <patrick.bellasi@....com>
Subject: Re: [PATCH v9 2/7] cpuset: Add new v2 cpuset.sched.domain_root flag

Hi,

On 29/05/18 09:41, Waiman Long wrote:

[...]

> +  cpuset.sched.domain_root
> +	A read-write single value file which exists on non-root
> +	cpuset-enabled cgroups.  It is a binary value flag that accepts
> +	either "0" (off) or "1" (on).  This flag is set by the parent
> +	and is not delegatable.
> +
> +	If set, it indicates that the current cgroup is the root of a
> +	new scheduling domain or partition that comprises itself and
> +	all its descendants except those that are scheduling domain
> +	roots themselves and their descendants.  The root cgroup is
> +	always a scheduling domain root.
> +
> +	There are constraints on where this flag can be set.  It can
> +	only be set in a cgroup if all the following conditions are true.
> +
> +	1) The "cpuset.cpus" is not empty and the list of CPUs is
> +	   exclusive, i.e. the CPUs are not shared by any of its siblings.
> +	2) The parent cgroup is also a scheduling domain root.
> +	3) There are no child cgroups with cpuset enabled.  This
> +	   eliminates corner cases that would otherwise have to be
> +	   handled if such a condition were allowed.
> +
> +	Setting this flag will take the CPUs away from the effective
> +	CPUs of the parent cgroup.  Once it is set, this flag cannot
> +	be cleared if there are any child cgroups with cpuset enabled.
> +	Further changes to "cpuset.cpus" are allowed as long as the
> +	first condition above remains true.
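
FWIW, here's roughly how I got to the configuration below (a sketch,
assuming cgroup2 is mounted at /sys/fs/cgroup and that the existing
top-level slices are first restricted to 6-11 so that g1's CPUs are
exclusive, per condition 1 above):

 # cd /sys/fs/cgroup
 # echo +cpuset >cgroup.subtree_control
 # for s in init.scope user.slice system.slice machine.slice; do
 >   echo 6-11 >$s/cpuset.cpus      # keep siblings off CPUs 0-5
 > done
 # mkdir g1
 # echo 0-5 >g1/cpuset.cpus         # exclusive w.r.t. all siblings
 # echo 1 >g1/cpuset.sched.domain_root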

IIUC, with the configuration below

 cpuset.cpus.effective:6-11
 cgroup.controllers:cpuset
 cpuset.mems.effective:0-1
 cgroup.subtree_control:cpuset
 g1/cpuset.cpus.effective:0-5
 g1/cgroup.controllers:cpuset
 g1/cpuset.sched.load_balance:1
 g1/cpuset.mems.effective:0-1
 g1/cpuset.cpus:0-5
 g1/cpuset.sched.domain_root:1
 user.slice/cpuset.cpus.effective:6-11
 user.slice/cgroup.controllers:cpuset
 user.slice/cpuset.sched.load_balance:1
 user.slice/cpuset.mems.effective:0-1
 user.slice/cpuset.cpus:6-11
 user.slice/cpuset.sched.domain_root:0
 init.scope/cpuset.cpus.effective:6-11
 init.scope/cgroup.controllers:cpuset
 init.scope/cpuset.sched.load_balance:1
 init.scope/cpuset.mems.effective:0-1
 init.scope/cpuset.cpus:6-11
 init.scope/cpuset.sched.domain_root:0
 system.slice/cpuset.cpus.effective:6-11
 system.slice/cgroup.controllers:cpuset
 system.slice/cpuset.sched.load_balance:1
 system.slice/cpuset.mems.effective:0-1
 system.slice/cpuset.cpus:6-11
 system.slice/cpuset.sched.domain_root:0
 machine.slice/cpuset.cpus.effective:6-11
 machine.slice/cgroup.controllers:cpuset
 machine.slice/cpuset.sched.load_balance:1
 machine.slice/cpuset.mems.effective:0-1
 machine.slice/cpuset.cpus:6-11
 machine.slice/cpuset.sched.domain_root:0

I should be able to

 # echo 0-4 >g1/cpuset.cpus

?

It doesn't let me.

I'm not sure we actually want to allow that, but it's what I would
expect per your text above.
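
In other words, given that further "cpuset.cpus" changes are said to
be allowed while the CPUs stay exclusive, I'd expect something like
the following, with CPU 5 flowing back to the parent's effective set
(a sketch of the behaviour I expected, not of what currently happens):

 # echo 0-4 >g1/cpuset.cpus         # shrink the g1 partition
 # cat g1/cpuset.cpus.effective
 0-4
 # cat cpuset.cpus.effective        # CPU 5 given back to the parent
 5-11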

Thanks,

- Juri

BTW: thanks a lot for your prompt feedback, and I hope it's OK if I
keep playing and asking questions. :)
