Message-ID: <55DC475B.2080502@arm.com>
Date: Tue, 25 Aug 2015 11:45:47 +0100
From: Juri Lelli <juri.lelli@....com>
To: Peter Zijlstra <peterz@...radead.org>,
Morten Rasmussen <Morten.Rasmussen@....com>
CC: "mingo@...hat.com" <mingo@...hat.com>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"daniel.lezcano@...aro.org" <daniel.lezcano@...aro.org>,
Dietmar Eggemann <Dietmar.Eggemann@....com>,
"yuyang.du@...el.com" <yuyang.du@...el.com>,
"mturquette@...libre.com" <mturquette@...libre.com>,
"rjw@...ysocki.net" <rjw@...ysocki.net>,
"sgurrappadi@...dia.com" <sgurrappadi@...dia.com>,
"pang.xunlei@....com.cn" <pang.xunlei@....com.cn>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>
Subject: Re: [RFCv5 PATCH 38/46] sched: scheduler-driven cpu frequency selection
Hi Peter,
On 15/08/15 14:05, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:24:21PM +0100, Morten Rasmussen wrote:
>> +void cpufreq_sched_set_cap(int cpu, unsigned long capacity)
>> +{
>> + unsigned int freq_new, cpu_tmp;
>> + struct cpufreq_policy *policy;
>> + struct gov_data *gd;
>> + unsigned long capacity_max = 0;
>> +
>> + /* update per-cpu capacity request */
>> + __this_cpu_write(pcpu_capacity, capacity);
>> +
>> + policy = cpufreq_cpu_get(cpu);
>> + if (IS_ERR_OR_NULL(policy)) {
>> + return;
>> + }
>> +
>> + if (!policy->governor_data)
>> + goto out;
>> +
>> + gd = policy->governor_data;
>> +
>> + /* bail early if we are throttled */
>> + if (ktime_before(ktime_get(), gd->throttle))
>> + goto out;
>
> Isn't this the wrong place to throttle? Suppose you're getting multiple
> new tasks placed on this CPU, the first one would trigger this callback
> and start increasing freq..
>
> While we're still changing freq. (and therefore throttled), another task
> comes in which would again raise the freq.
>
> With this scheme you lose the latter freq. change and will not
> re-evaluate.
>
The way the policy is implemented, you should not hit this problem.
For new tasks we actually jump to max freq, as a new task's util gets
initialized to 1024. For load-balancing migrations we wait until all
the tasks have been migrated and then trigger a single update.
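To illustrate the point about new tasks: since utilization starts at
full scale (1024), the resulting capacity request maps straight to the
maximum frequency. A rough userspace sketch of that mapping (the helper
name and the linear cap-to-freq formula here are illustrative, not
lifted from the patch):

```c
#include <assert.h>

/* Scheduler capacity is expressed on a 0..1024 scale. */
#define SCHED_CAPACITY_SCALE 1024UL

/*
 * Illustrative mapping from a capacity request to a target frequency:
 * freq = freq_max * capacity / SCHED_CAPACITY_SCALE. A new task's util
 * of 1024 therefore requests freq_max directly.
 */
static unsigned long cap_to_freq(unsigned long cap, unsigned long freq_max)
{
	if (cap > SCHED_CAPACITY_SCALE)
		cap = SCHED_CAPACITY_SCALE;
	return freq_max * cap / SCHED_CAPACITY_SCALE;
}
```

So the first enqueue of a new task already asks for the top frequency,
and a throttled second request has nothing higher left to ask for.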
> Any scheme that limits the callbacks to the actual hardware will have to
> buffer requests and once the hardware returns (be it through an
> interrupt or timeout) issue the latest request.
>
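The buffering scheme Peter describes could look roughly like this: while
a transition is in flight, remember only the newest capacity request and
replay it when the hardware completes. This is a standalone userspace
sketch; the struct and function names are hypothetical, not from the
patch, and locking is omitted for brevity:

```c
#include <stdbool.h>

/*
 * Sketch of "buffer the latest request": requests arriving while the
 * hardware is busy overwrite each other, and the survivor is re-issued
 * on completion, so the final request is never silently dropped.
 */
struct freq_gov {
	bool in_transition;		/* hardware busy changing freq */
	bool request_pending;		/* a request arrived while busy */
	unsigned long pending_cap;	/* latest buffered capacity */
	unsigned long current_cap;	/* capacity actually programmed */
};

/* Called with a new per-cpu capacity request. */
static void gov_request(struct freq_gov *gd, unsigned long cap)
{
	if (gd->in_transition) {
		/* Overwrite: only the newest request matters. */
		gd->pending_cap = cap;
		gd->request_pending = true;
		return;
	}
	gd->in_transition = true;
	gd->current_cap = cap;	/* kick off the hardware transition */
}

/* Called when the hardware signals transition completion. */
static void gov_transition_done(struct freq_gov *gd)
{
	gd->in_transition = false;
	if (gd->request_pending) {
		gd->request_pending = false;
		gd->in_transition = true;
		gd->current_cap = gd->pending_cap; /* replay latest */
	}
}
```

The difference from plain throttling is that the window where requests
arrive does not lose the last one; it is deferred, not discarded.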
But, it is true that if the above events happened the other way around
(we trigger an update after load balancing and then a new task arrives),
we may miss the opportunity to jump to max for the new task. In my mind
this is probably not a big deal, as we'll get a tick soon enough that
will fix things anyway (saving us some complexity in the backend).
What do you think?
Thanks,
- Juri
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/