Message-ID: <20150324163503.GZ23123@twins.programming.kicks-ass.net>
Date:	Tue, 24 Mar 2015 17:35:03 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Morten Rasmussen <morten.rasmussen@....com>
Cc:	mingo@...hat.com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, yuyang.du@...el.com,
	preeti@...ux.vnet.ibm.com, mturquette@...aro.org, nico@...aro.org,
	rjw@...ysocki.net, juri.lelli@....com, linux-kernel@...r.kernel.org
Subject: Re: [RFCv3 PATCH 33/48] sched: Energy-aware wake-up task placement

On Wed, Feb 04, 2015 at 06:31:10PM +0000, Morten Rasmussen wrote:
> +static int energy_aware_wake_cpu(struct task_struct *p)
> +{
> +	struct sched_domain *sd;
> +	struct sched_group *sg, *sg_target;
> +	int target_max_cap = SCHED_CAPACITY_SCALE;
> +	int target_cpu = task_cpu(p);
> +	int i;
> +
> +	sd = rcu_dereference(per_cpu(sd_ea, task_cpu(p)));
> +
> +	if (!sd)
> +		return -1;
> +
> +	sg = sd->groups;
> +	sg_target = sg;
> +	/* Find group with sufficient capacity */
> +	do {
> +		int sg_max_capacity = group_max_capacity(sg);
> +
> +		if (sg_max_capacity >= task_utilization(p) &&
> +				sg_max_capacity <= target_max_cap) {
> +			sg_target = sg;
> +			target_max_cap = sg_max_capacity;
> +		}
> +	} while (sg = sg->next, sg != sd->groups);
> +
> +	/* Find cpu with sufficient capacity */
> +	for_each_cpu_and(i, tsk_cpus_allowed(p), sched_group_cpus(sg_target)) {
> +		int new_usage = get_cpu_usage(i) + task_utilization(p);
> +
> +		if (new_usage > capacity_orig_of(i))
> +			continue;
> +
> +		if (new_usage < capacity_curr_of(i)) {
> +			target_cpu = i;
> +			if (!cpu_rq(i)->nr_running)
> +				break;
> +		}
> +
> +		/* cpu has capacity at higher OPP, keep it as fallback */
> +		if (target_cpu == task_cpu(p))
> +			target_cpu = i;
> +	}
> +
> +	if (target_cpu != task_cpu(p)) {
> +		struct energy_env eenv = {
> +			.usage_delta	= task_utilization(p),
> +			.src_cpu	= task_cpu(p),
> +			.dst_cpu	= target_cpu,
> +		};
> +
> +		/* Not enough spare capacity on previous cpu */
> +		if (cpu_overutilized(task_cpu(p), sd))
> +			return target_cpu;
> +
> +		if (energy_diff(&eenv) >= 0)
> +			return task_cpu(p);
> +	}
> +
> +	return target_cpu;
> +}

So while you have some cpufreq -> sched coupling (the capacity_curr
thing), this would be the site where you could provide sched -> cpufreq
coupling, right?
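
That is, capacity_curr_of() gives the capacity at the currently selected
OPP -- roughly the sketch below. This is illustrative only: it assumes
capacity scales linearly with frequency, and cpu_cur_freq() /
cpu_max_freq() are made-up helper names, not anything from the patch set.

static unsigned long capacity_curr_of(int cpu)
{
	/* original capacity scaled down to the current OPP */
	return capacity_orig_of(cpu) * cpu_cur_freq(cpu) / cpu_max_freq(cpu);
}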

So does it make sense to at least put in the right hooks now? I realize
we'll likely take cpufreq out back and feed it to the bears, but
something managing P states will still be there, whatever we end up
calling the newfangled thing, and this would be the place to hook it.
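
For illustration only, the kind of hook I mean could sit right where
target_cpu has been chosen -- something like the below, where
sched_freq_request() is a made-up name standing in for whatever the
P-state management thing ends up exposing:

	/*
	 * Hypothetical sched -> cpufreq hook, not an existing interface:
	 * tell the P-state governor how much utilization the chosen cpu
	 * is about to receive so it can raise the OPP before the task
	 * lands there.
	 */
	if (target_cpu != task_cpu(p))
		sched_freq_request(target_cpu,
				   get_cpu_usage(target_cpu) + task_utilization(p));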
