Message-ID: <54361922.8070808@linux.vnet.ibm.com>
Date:	Thu, 09 Oct 2014 10:42:02 +0530
From:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To:	Raghavendra KT <raghavendra.kt@...ux.vnet.ibm.com>
CC:	svaidy@...ux.vnet.ibm.com, Peter Zijlstra <peterz@...radead.org>,
	rjw@...ysocki.net, lizefan@...wei.com,
	Anton Blanchard <anton@...ba.org>, Tejun Heo <tj@...nel.org>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>, cgroups@...r.kernel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] cpusets: Make cpus_allowed and mems_allowed masks hotplug
 invariant

Hi Raghu,

On 10/08/2014 08:24 PM, Raghavendra KT wrote:
> On Wed, Oct 8, 2014 at 12:37 PM, Preeti U Murthy
> <preeti@...ux.vnet.ibm.com> wrote:
>> There are two masks associated with cpusets: cpus/mems_allowed and
>> effective_cpus/mems. On the legacy hierarchy both masks are kept
>> consistent with each other; their value is the intersection of the
>> user-configured mask and the currently active cpus/mems. This means
>> that the original values set in these masks are destroyed on each
>> cpu/mem hot-unplug operation. As a consequence, when the cpus/mems
>> are hot plugged back in, the tasks no longer run on them and
>> performance degrades, in spite of there being resources to run on.
>>
>> This effect is not seen in the default hierarchy, since the allowed
>> and effective masks are maintained separately there: the allowed
>> masks are never touched once configured, and only the effective
>> masks vary with hotplug.
>>
>> This patch replicates the above design on the legacy hierarchy as
>> well, so that:
>>
>> 1. Tasks always run on the cpus/memory nodes that they are allowed
>> to run on, as long as those are online. The allowed masks are
>> hotplug invariant.
>>
>> 2. When all cpus/memory nodes in a cpuset are hot unplugged, the
>> tasks are moved to the nearest ancestor that has resources to run on.
> 
> Hi Preeti,
> 
> I may be missing something here; could you please explain when tasks
> get moved out of a cpuset after this patch, and why that is even
> necessary?

On the legacy hierarchy, tasks are moved to their parent cpuset if the
cpuset to which they were initially bound becomes empty. What the patch
does has nothing to do with moving tasks when the cpuset they are bound
to becomes empty. Point 2 above was mentioned merely to state that this
part of the behavior is not changed by the patch. The patch only
ensures that the original cpuset configuration is not disturbed during
hotplug operations.
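
To make that concrete, below is a rough sketch of what the legacy
hotplug path looks like with this design. It is only an illustration of
the idea, not the actual kernel/cpuset.c code; the function name
legacy_hotplug_update_cpus() is made up for the example.

/*
 * Illustrative sketch only -- simplified, not the real kernel code.
 */
static void legacy_hotplug_update_cpus(struct cpuset *cs,
				       const struct cpumask *new_active)
{
	/*
	 * cpus_allowed holds what the user configured and is left
	 * untouched across hotplug. Only effective_cpus tracks which
	 * of those cpus are currently online.
	 */
	cpumask_and(cs->effective_cpus, cs->cpus_allowed, new_active);

	/*
	 * If none of the configured cpus is online any more, move the
	 * tasks to the nearest ancestor that still has cpus (the
	 * existing legacy behavior).
	 */
	if (cpumask_empty(cs->effective_cpus))
		remove_tasks_in_empty_cpuset(cs);
}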

> 
> IIUC, with the default hierarchy we should never hit a case where we
> have an empty effective cpuset, and hence remove_tasks_in_empty_cpuset
> should never be called, no?
> 
> If my assumption is correct, then we should remove
> remove_tasks_in_empty_cpuset itself...

remove_tasks_in_empty_cpuset() is called on the legacy hierarchy when a
cpuset becomes empty, hence we still require it. But you are right that
it is not called on the default hierarchy.
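
In other words, the hotplug handler ends up split roughly along these
lines. Again, this is just a sketch and not the exact kernel code;
update_effective_masks() below is a placeholder, not a real function.

/* Rough sketch of the legacy vs. default hierarchy split. */
static void cpuset_hotplug_update_tasks_sketch(struct cpuset *cs)
{
	if (cgroup_on_dfl(cs->css.cgroup)) {
		/*
		 * Default hierarchy: an emptied effective mask falls
		 * back to an ancestor's cpus, so the tasks stay where
		 * they are and no migration is needed.
		 */
		update_effective_masks(cs);	/* placeholder */
	} else {
		/*
		 * Legacy hierarchy: an empty cpuset must hand its
		 * tasks over to the nearest non-empty ancestor.
		 */
		if (cpumask_empty(cs->effective_cpus))
			remove_tasks_in_empty_cpuset(cs);
	}
}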

Regards
Preeti U Murthy
> 

