Date:	Fri, 27 Mar 2015 15:58:49 +0000
From:	Morten Rasmussen <morten.rasmussen@....com>
To:	Sai Gurrappadi <sgurrappadi@...dia.com>
Cc:	"peterz@...radead.org" <peterz@...radead.org>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
	Dietmar Eggemann <Dietmar.Eggemann@....com>,
	"yuyang.du@...el.com" <yuyang.du@...el.com>,
	"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
	"mturquette@...aro.org" <mturquette@...aro.org>,
	"nico@...aro.org" <nico@...aro.org>,
	"rjw@...ysocki.net" <rjw@...ysocki.net>,
	Juri Lelli <Juri.Lelli@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Peter Boonstoppel <pboonstoppel@...dia.com>
Subject: Re: [RFCv3 PATCH 30/48] sched: Calculate energy consumption of
 sched_group

On Fri, Mar 20, 2015 at 06:40:39PM +0000, Sai Gurrappadi wrote:
> On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> > +/*
> > + * sched_group_energy(): Returns absolute energy consumption of cpus belonging
> > + * to the sched_group, including shared resources used only by members of the
> > + * group. Iterates over all cpus in the hierarchy below the sched_group,
> > + * starting from the bottom and working its way up before going to the next
> > + * cpu, until all cpus are covered at all levels. The current implementation
> > + * is likely to gather the same usage statistics multiple times. This can
> > + * probably be done in a faster but more complex way.
> > + */
> > +static unsigned int sched_group_energy(struct sched_group *sg_top)
> > +{
> > +	struct sched_domain *sd;
> > +	int cpu, total_energy = 0;
> > +	struct cpumask visit_cpus;
> > +	struct sched_group *sg;
> > +
> > +	WARN_ON(!sg_top->sge);
> > +
> > +	cpumask_copy(&visit_cpus, sched_group_cpus(sg_top));
> > +
> > +	while (!cpumask_empty(&visit_cpus)) {
> > +		struct sched_group *sg_shared_cap = NULL;
> > +
> > +		cpu = cpumask_first(&visit_cpus);
> > +
> > +		/*
> > +		 * Is the group utilization affected by cpus outside this
> > +		 * sched_group?
> > +		 */
> > +		sd = highest_flag_domain(cpu, SD_SHARE_CAP_STATES);
> > +		if (sd && sd->parent)
> > +			sg_shared_cap = sd->parent->groups;
> > +
> > +		for_each_domain(cpu, sd) {
> > +			sg = sd->groups;
> > +
> > +			/* Has this sched_domain already been visited? */
> > +			if (sd->child && cpumask_first(sched_group_cpus(sg)) != cpu)
> > +				break;
> > +
> > +			do {
> > +				struct sched_group *sg_cap_util;
> > +				unsigned group_util;
> > +				int sg_busy_energy, sg_idle_energy;
> > +				int cap_idx;
> > +
> > +				if (sg_shared_cap && sg_shared_cap->group_weight >= sg->group_weight)
> > +					sg_cap_util = sg_shared_cap;
> > +				else
> > +					sg_cap_util = sg;
> > +
> > +				cap_idx = find_new_capacity(sg_cap_util, sg->sge);
> > +				group_util = group_norm_usage(sg);
> > +				sg_busy_energy = (group_util * sg->sge->cap_states[cap_idx].power)
> > +										>> SCHED_CAPACITY_SHIFT;
> > +				sg_idle_energy = ((SCHED_LOAD_SCALE-group_util) * sg->sge->idle_states[0].power)
> > +										>> SCHED_CAPACITY_SHIFT;
> > +
> > +				total_energy += sg_busy_energy + sg_idle_energy;
> 
> group_util should be normalized with the newly found capacity instead of
> capacity_curr.
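
For context, the busy/idle split in the quoted hunk weights the two power
values by utilization. A standalone sketch with made-up numbers (the power
values and the 50% utilization are illustrative only, not taken from a real
energy model):

	#include <stdio.h>

	#define SCHED_CAPACITY_SHIFT	10
	#define SCHED_LOAD_SCALE	(1 << SCHED_CAPACITY_SHIFT)	/* 1024 */

	int main(void)
	{
		unsigned int group_util = 512;	/* group ~50% utilized */
		int busy_power = 600;	/* hypothetical busy power at cap_idx */
		int idle_power = 100;	/* hypothetical shallow idle power */

		/* Same arithmetic as the quoted hunk: a utilization-weighted
		 * blend of busy and idle power. */
		int busy = (group_util * busy_power) >> SCHED_CAPACITY_SHIFT;
		int idle = ((SCHED_LOAD_SCALE - group_util) * idle_power)
							>> SCHED_CAPACITY_SHIFT;

		/* prints: busy=300 idle=50 total=350 */
		printf("busy=%d idle=%d total=%d\n", busy, idle, busy + idle);
		return 0;
	}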

You're right. In the next patch, where sched_group_energy() can be used
for energy predictions based on usage deltas, group_util should be
normalized to the new capacity.
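
One possible shape of that fix, sketched against this patch (it assumes
cap_states[] also carries the compute capacity, cap, of each state, and
reuses the get_cpu_usage() accessor from earlier in the series; names and
details are illustrative, not the final code):

	static unsigned group_norm_usage(struct sched_group *sg, int cap_idx)
	{
		int i;
		unsigned long usage_sum = 0;
		/* Normalize against the newly selected capacity state,
		 * not capacity_curr. */
		unsigned long capacity = sg->sge->cap_states[cap_idx].cap;

		for_each_cpu(i, sched_group_cpus(sg))
			usage_sum += get_cpu_usage(i);

		/* Usage may transiently exceed the new capacity; clamp. */
		if (usage_sum >= capacity)
			return SCHED_CAPACITY_SCALE;

		return (usage_sum << SCHED_CAPACITY_SHIFT) / capacity;
	}

with the call site in sched_group_energy() becoming:

		cap_idx = find_new_capacity(sg_cap_util, sg->sge);
		group_util = group_norm_usage(sg, cap_idx);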

Thanks for spotting this mistake.

Morten