Date:	Fri, 14 Aug 2015 11:28:28 +0100
From:	Morten Rasmussen <morten.rasmussen@....com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	mingo@...hat.com, vincent.guittot@...aro.org,
	daniel.lezcano@...aro.org,
	Dietmar Eggemann <Dietmar.Eggemann@....com>,
	yuyang.du@...el.com, mturquette@...libre.com, rjw@...ysocki.net,
	Juri Lelli <Juri.Lelli@....com>, sgurrappadi@...dia.com,
	pang.xunlei@....com.cn, linux-kernel@...r.kernel.org,
	linux-pm@...r.kernel.org
Subject: Re: [RFCv5 PATCH 22/46] sched: Calculate energy consumption of
 sched_group

On Thu, Aug 13, 2015 at 05:34:17PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:24:05PM +0100, Morten Rasmussen wrote:
> > +static unsigned int sched_group_energy(struct sched_group *sg_top)
> > +{
> > +	struct sched_domain *sd;
> > +	int cpu, total_energy = 0;
> > +	struct cpumask visit_cpus;
> > +	struct sched_group *sg;
> > +
> > +	WARN_ON(!sg_top->sge);
> > +
> > +	cpumask_copy(&visit_cpus, sched_group_cpus(sg_top));
> > +
> > +	while (!cpumask_empty(&visit_cpus)) {
> > +		struct sched_group *sg_shared_cap = NULL;
> > +
> > +		cpu = cpumask_first(&visit_cpus);
> > +
> > +		/*
> > +		 * Is the group utilization affected by cpus outside this
> > +		 * sched_group?
> > +		 */
> > +		sd = highest_flag_domain(cpu, SD_SHARE_CAP_STATES);
> > +		if (sd && sd->parent)
> > +			sg_shared_cap = sd->parent->groups;
> > +
> > +		for_each_domain(cpu, sd) {
> > +			sg = sd->groups;
> > +
> > +			/* Has this sched_domain already been visited? */
> > +			if (sd->child && group_first_cpu(sg) != cpu)
> > +				break;
> > +
> > +			do {
> > +				struct sched_group *sg_cap_util;
> > +				unsigned long group_util;
> > +				int sg_busy_energy, sg_idle_energy, cap_idx;
> > +
> > +				if (sg_shared_cap && sg_shared_cap->group_weight >= sg->group_weight)
> > +					sg_cap_util = sg_shared_cap;
> > +				else
> > +					sg_cap_util = sg;
> > +
> > +				cap_idx = find_new_capacity(sg_cap_util, sg->sge);
> 
> So here its not really 'new' capacity is it, most like the current
> capacity?

Yes, sort of. It is what the current capacity (P-state) should be in order
to accommodate the current utilization. With a sane cpufreq governor the
actual capacity is most likely not far off.

I could rename it to find_capacity() instead. It is extended in a
subsequent patch to figure out the 'new' capacity in cases where we
consider putting more utilization into the group.
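
For reference, the basic idea is roughly the following (just a sketch, not
the exact patch code; the energy model fields nr_cap_states/cap_states[]
and the group_max_usage() helper are assumed as used elsewhere in the
series):

static int find_new_capacity(struct sched_group *sg,
			     struct sched_group_energy *sge)
{
	/* Highest usage among the cpus in the group. */
	unsigned long util = group_max_usage(sg);
	int idx;

	/* Pick the lowest capacity state that can accommodate that usage. */
	for (idx = 0; idx < sge->nr_cap_states; idx++) {
		if (sge->cap_states[idx].cap >= util)
			return idx;
	}

	/* Nothing fits, run at the highest capacity state. */
	return sge->nr_cap_states - 1;
}

The later patch essentially adds the utilization we are considering to put
into the group before doing the same search.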

> So in the case of coupled P states, you look for the CPU with the highest
> utilization, as that is the one that determines the required P state.

Yes. That is why we need the SD_SHARE_CAP_STATES flag and we use
group_max_usage() in find_new_capacity().
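
Something along these lines (again only a sketch; get_cpu_usage() stands
for the per-cpu usage estimate used by the series):

static unsigned long group_max_usage(struct sched_group *sg)
{
	unsigned long usage = 0, cpu_usage;
	int cpu;

	/*
	 * With coupled P-states the whole group must run at the capacity
	 * required by its busiest cpu, so the maximum usage across the
	 * group determines the capacity state.
	 */
	for_each_cpu(cpu, sched_group_cpus(sg)) {
		cpu_usage = get_cpu_usage(cpu);
		if (cpu_usage > usage)
			usage = cpu_usage;
	}

	return usage;
}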
