Message-ID: <551075D9.2040409@arm.com>
Date:	Mon, 23 Mar 2015 20:21:45 +0000
From:	Dietmar Eggemann <dietmar.eggemann@....com>
To:	Peter Zijlstra <peterz@...radead.org>,
	Morten Rasmussen <Morten.Rasmussen@....com>
CC:	Sai Gurrappadi <sgurrappadi@...dia.com>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
	"yuyang.du@...el.com" <yuyang.du@...el.com>,
	"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
	"mturquette@...aro.org" <mturquette@...aro.org>,
	"nico@...aro.org" <nico@...aro.org>,
	"rjw@...ysocki.net" <rjw@...ysocki.net>,
	Juri Lelli <Juri.Lelli@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Peter Boonstoppel <pboonstoppel@...dia.com>
Subject: Re: [RFCv3 PATCH 30/48] sched: Calculate energy consumption of sched_group

On 23/03/15 16:47, Peter Zijlstra wrote:
> On Mon, Mar 16, 2015 at 02:15:46PM +0000, Morten Rasmussen wrote:
>> You are absolutely right. The current code is broken for system
>> topologies where all cpus share the same clock source. To be honest, it
>> is actually worse than that and you already pointed out the reason. We
>> don't have a way of representing top level contributions to power
>> consumption in RFCv3, as we don't have sched_group spanning all cpus in
>> single cluster system. For example, we can't represent L2 cache and
>> interconnect power consumption on such systems.
>>
>> In RFCv2 we had a system wide sched_group dangling by itself for that
>> purpose. We chose to remove that in this rewrite as it led to messy
>> code. In my opinion, a more elegant solution is to introduce an
>> additional sched_domain above the current top level which has a single
>> sched_group spanning all cpus in the system. That should fix the
>> SD_SHARE_CAP_STATES problem and allow us to attach power data for the
>> top level.
>
> Maybe remind us why this needs to be tied to sched_groups ? Why can't we
> attach the energy information to the domains?

Currently, on our two-cluster (big.LITTLE) system (cluster0: big cpus, 
cluster1: little cpus), we attach energy information to all sg's at MC 
level (cpu/core related energy data) and at DIE sd level (cluster 
related energy data).
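
Roughly, the per-sg data has this shape (struct and field names below 
are illustrative placeholders, not necessarily the exact RFCv3 
definitions):

struct capacity_state {
	unsigned long cap;	/* compute capacity at this P-state */
	unsigned long power;	/* busy power at this P-state */
};

struct sched_group_energy {
	unsigned int nr_cap_states;
	struct capacity_state *cap_states;
	/* plus an idle-state table along the same lines */
};

/* referenced from each group, next to sg->sgc: */
struct sched_group {
	/* ... */
	struct sched_group_capacity *sgc;
	struct sched_group_energy *sge;	/* per-group energy data */
};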

At MC level (cpus sharing the same u-arch), attaching the energy 
information to the sd is clearly much easier than attaching it to the 
individual sg's.

But at DIE level, when we want to figure out the cluster energy data 
for a cluster represented by an sg other than the first sg (sg0), we 
would have to access that data via the DIE sd of one of the cpus in 
this cluster. I haven't seen code in CFS that actually does that.
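
Something along these lines would be needed (purely illustrative; an 
'energy' pointer in struct sched_domain doesn't exist today):

static struct sched_group_energy *cluster_energy(struct sched_group *sg)
{
	int cpu = cpumask_first(sched_group_cpus(sg));
	struct sched_domain *sd;

	/* caller holds rcu_read_lock(); walk up to the top (DIE) sd */
	for_each_domain(cpu, sd) {
		if (!sd->parent)
			break;
	}

	return sd ? sd->energy : NULL;	/* hypothetical sd->energy field */
}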

IMHO, the current code always iterates over the sg's of an sd and 
accesses either sg (sched_group) or sg->sgc (sched_group_capacity) 
data. Our energy data follows the sched_group_capacity example.
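
I.e. the usual pattern in fair.c is roughly:

	struct sched_group *sg = sd->groups;

	do {
		/*
		 * Per-group data is directly at hand here, e.g.
		 * sg->sgc->capacity today, or per-group energy data
		 * in the EAS case.
		 */
		sg = sg->next;
	} while (sg != sd->groups);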

> There is an additional problem with groups you've not yet discovered and
> that is overlapping groups. Certain NUMA topologies result in this.
> There the sum of cpus over the groups is greater than the total cpus in
> the domain.

Yeah, we haven't tried EAS on such a system, nor have we enabled the 
FORCE_SD_OVERLAP sched feature in a long time.

