Message-ID: <570E879A.90008@linaro.org>
Date: Wed, 13 Apr 2016 10:53:30 -0700
From: Steve Muckle <steve.muckle@...aro.org>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <Juri.Lelli@....com>,
Patrick Bellasi <patrick.bellasi@....com>,
Michael Turquette <mturquette@...libre.com>
Subject: Re: [PATCH 1/2] sched/fair: move cpufreq hook to
update_cfs_rq_load_avg()
On 04/13/2016 07:45 AM, Rafael J. Wysocki wrote:
>> I'm concerned generally with the latency to react to changes in
>> required capacity due to remote wakeups, which are quite common on SMP
>> platforms with shared cache. Unless the hook is called it could take
>> up to a tick to react AFAICS if the target CPU is running some other
>> task that does not get preempted by the wakeup.
>
> So the scenario seems to be that CPU A is running task X and CPU B
> wakes up task Y on it remotely, but that task has to wait for CPU A to
> get to it, so you want to increase the frequency of CPU A at the
> wakeup time so as to reduce the time the woken up task has to wait.
>
> In that case task X would not be giving the CPU away (ie. no
> invocations of schedule()) for the whole tick, so it would be
> CPU/memory bound. In that case I would expect CPU A to be running at
> full capacity already unless this is the first tick period in which
> task X behaves this way which looks like a corner case to me.
This situation is fairly common in bursty workloads (such as UI-driven
ones).
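
To make the concern concrete, here is a rough user-space sketch (not
kernel code - the HZ value, the wakeup offset, and the helper name are
all just assumptions for illustration) of how much extra reaction
latency the tick-only path can add compared to a hook invoked at
load-update/enqueue time:

#include <stdio.h>

#define HZ		250		/* assumed tick rate */
#define TICK_US		(1000000 / HZ)	/* tick period in microseconds */

/* Hypothetical helper: microseconds until CPU A's next tick. */
static unsigned int us_until_next_tick(unsigned int wakeup_offset_us)
{
	return TICK_US - (wakeup_offset_us % TICK_US);
}

int main(void)
{
	/* Task Y is woken 500us after CPU A's last tick (arbitrary). */
	unsigned int wakeup_offset_us = 500;

	/* With a hook called at enqueue/load-update time the governor
	 * can be notified immediately. */
	unsigned int with_hook_us = 0;

	/* Without it, CPU A keeps running task X at its current
	 * frequency until the next tick re-evaluates capacity. */
	unsigned int without_hook_us = us_until_next_tick(wakeup_offset_us);

	printf("tick period:             %u us\n", TICK_US);
	printf("reaction delay w/ hook:  %u us\n", with_hook_us);
	printf("reaction delay w/o hook: %u us (up to %u us worst case)\n",
	       without_hook_us, TICK_US);
	return 0;
}

For a bursty, UI-driven workload those extra milliseconds on every
remote wakeup are the latency I am worried about.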
> Moreover, sending an IPI to CPU A in that case looks like the right
> thing to do to me anyway.
Sorry, I didn't follow - sending an IPI to do what, exactly? Perform the
wakeup operation on the target CPU?
thanks,
Steve