Message-ID: <20150213133158.GP2896@worktop.programming.kicks-ass.net>
Date: Fri, 13 Feb 2015 14:31:58 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
Cc: tglx@...utronix.de, arjan@...ux.intel.com,
linux-kernel@...r.kernel.org, jacob.jun.pan@...el.com,
fweisbec@...il.com, frederic@...nel.org, daniel.lezcano@...aro.org,
amit.kucheria@...aro.org, edubezval@...il.com,
viresh.kumar@...aro.org, rui.zhang@...el.com
Subject: Re: [PATCH V2] idle/intel_powerclamp: Redesign idle injection to use
bandwidth control mechanism

On Mon, Feb 09, 2015 at 10:19:43AM +0530, Preeti U Murthy wrote:
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 8db31ef..6a7ccb2 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -3002,6 +3002,12 @@ extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
>
> #ifdef CONFIG_CGROUP_SCHED
> extern struct task_group root_task_group;
> +extern int tg_set_cfs_quota(struct task_group *tg, long cfs_quota_us);
> +extern int tg_set_cfs_period(struct task_group *tg, long cfs_period_us);
> +#else
> +
> +static inline int tg_set_cfs_quota(struct task_group *tg, long cfs_quota_us);
> +static inline int tg_set_cfs_period(struct task_group *tg, long cfs_period_us);
> #endif /* CONFIG_CGROUP_SCHED */

Instead you might want to make the whole powerclamp thing depend on
CONFIG_CFS_BANDWIDTH.
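
Something like the below (untested sketch; the existing option text in
drivers/thermal/Kconfig is paraphrased from memory, only the added
dependency is the point):

config INTEL_POWERCLAMP
	tristate "Intel PowerClamp idle injection driver"
	depends on THERMAL
	depends on X86
	depends on CPU_SUP_INTEL
	# the redesigned idle injection drives the CFS bandwidth controller
	depends on CFS_BANDWIDTH
	help
	  Enable this to enable Intel PowerClamp idle injection driver. This
	  is useful for thermal mitigation.

Since CFS_BANDWIDTH already depends on FAIR_GROUP_SCHED and therefore on
CGROUP_SCHED, the #else stubs in the hunk above would then not be needed
for powerclamp at all.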

Also, exposing these and root_task_group is of course vile. Not to
mention that you change the user-visible (cgroup) interface without so
much as mentioning it.

In any case, I cannot see how this could ever work. Bandwidth is shared
across all CPUs in the group; nothing will even attempt to get the CPUs
to idle at the same time, which is the whole point of synchronized idle
injection.
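
To illustrate, here is a toy userspace model (made up by me, not kernel
code; the 5ms slice, the 50ms/100ms quota/period and the per-CPU load
pattern are all arbitrary) of how the bandwidth controller hands out
runtime: each CPU pulls slices from one shared pool and only gets
throttled once that pool is dry, so the throttled windows land wherever
the load happens to put them, not in package-wide aligned chunks:

/*
 * Toy model of CFS bandwidth control: one quota pool per period,
 * from which each CPU's runqueue pulls fixed slices as it needs
 * runtime.  A CPU is throttled only when the pool is empty, so with
 * uneven load the throttled windows do not line up across CPUs.
 */
#include <stdio.h>

#define NR_CPUS		4
#define PERIOD_US	100000	/* 100ms period */
#define QUOTA_US	50000	/* 50ms of runtime shared by all CPUs */
#define SLICE_US	5000	/* slice a CPU pulls from the pool */
#define TICK_US		1000	/* simulation granularity */

int main(void)
{
	long pool = QUOTA_US;		/* global per-period pool */
	long local[NR_CPUS] = { 0 };	/* runtime currently held per CPU */

	for (long now = 0; now < PERIOD_US; now += TICK_US) {
		printf("t=%5ldus ", now);

		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			/* CPU0 is busy every tick, CPU3 only one tick in four */
			int busy = ((now / TICK_US) % NR_CPUS) >= cpu;

			if (busy && local[cpu] < TICK_US && pool > 0) {
				/* pull another slice from the shared pool */
				long grab = pool < SLICE_US ? pool : SLICE_US;
				pool -= grab;
				local[cpu] += grab;
			}

			if (!busy)
				printf(" cpu%d:idle ", cpu);
			else if (local[cpu] >= TICK_US) {
				local[cpu] -= TICK_US;	/* runs this tick */
				printf(" cpu%d:run  ", cpu);
			} else
				printf(" cpu%d:THROT", cpu);	/* pool is dry */
		}
		printf("  pool=%ld\n", pool);
	}
	return 0;
}

Run it and cpuN:THROT shows up at different times on different CPUs,
and only because the pool happens to be empty, never because anything
tried to line the CPUs up.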