Message-ID: <20150920220324.GA20859@leoy-linaro>
Date: Mon, 21 Sep 2015 06:03:24 +0800
From: Leo Yan <leo.yan@...aro.org>
To: Steve Muckle <steve.muckle@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
"peterz@...radead.org" <peterz@...radead.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"daniel.lezcano@...aro.org" <daniel.lezcano@...aro.org>,
"yuyang.du@...el.com" <yuyang.du@...el.com>,
"mturquette@...libre.com" <mturquette@...libre.com>,
"rjw@...ysocki.net" <rjw@...ysocki.net>,
Juri Lelli <Juri.Lelli@....com>,
"sgurrappadi@...dia.com" <sgurrappadi@...dia.com>,
"pang.xunlei@....com.cn" <pang.xunlei@....com.cn>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>
Subject: Re: [RFCv5 PATCH 32/46] sched: Energy-aware wake-up task placement
On Sun, Sep 20, 2015 at 11:39:16AM -0700, Steve Muckle wrote:
> On 09/18/2015 03:34 AM, Dietmar Eggemann wrote:
> >> Should we also consider the scenario where two groups have the same
> >> capacity? This would benefit the LITTLE.LITTLE case. The code would
> >> then look like below:
> >>
> >> 	int target_sg_cpu = INT_MAX;
> >>
> >> 	if (capacity_of(max_cap_cpu) <= target_max_cap &&
> >> 	    task_fits_capacity(p, max_cap_cpu)) {
> >>
> >> 		if ((capacity_of(max_cap_cpu) == target_max_cap) &&
> >> 		    (target_sg_cpu < max_cap_cpu))
> >> 			continue;
> >>
> >> 		target_sg_cpu = max_cap_cpu;
> >> 		sg_target = sg;
> >> 		target_max_cap = capacity_of(max_cap_cpu);
> >> 	}
> >>
> >
> > It's true that on your SMP system the target sched_group 'sg_target'
> > depends only on 'task_cpu(p)' because this determines sched_domain 'sd'
> > (and so the order of sched_groups for the iteration).
> >
> > So the current do-while loop to select 'sg_target' for an SMP system
> > makes little sense.
> >
> > But why should we favour the first sched_group (cluster) (the one w/ the
> > lower max_cap_cpu number) in this situation?
>
> Running the originally proposed code on a system with two identical
> clusters, it looks like we'll always end up doing an energy-aware search
> in the task's prev_cpu cluster (sched_group). If you had small tasks
> scattered across both clusters, energy_aware_wake_cpu() would not
> consider condensing them on a single cluster. Leo, was this the issue
> you were seeing?
Exactly.
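Just to make the point concrete, below is a rough user-space sketch (not
the actual kernel code) of the tie-break I proposed above: when two
sched_groups report the same max CPU capacity, keep the group whose
max_cap_cpu has the lower CPU number. Since the do-while loop in
energy_aware_wake_cpu() starts at the group containing task_cpu(p), the
selected group would otherwise depend on prev_cpu; with the tie-break it
becomes independent of the iteration order. The capacity value and CPU
numbers are made up, and I assume the task fits in both groups, so the
task_fits_capacity() check is omitted:

/*
 * Rough user-space model, NOT the kernel code: pick a sched_group by
 * lowest max CPU capacity, and on a capacity tie keep the group whose
 * max_cap_cpu has the lower CPU number.  "start" mimics the fact that
 * the kernel's do-while begins at the group containing task_cpu(p).
 */
#include <limits.h>
#include <stdio.h>

struct group {
	int max_cap_cpu;	/* highest-capacity CPU in the group */
	int capacity;		/* capacity_of(max_cap_cpu), made up */
};

static int pick_group(const struct group *g, int n, int start)
{
	int target = -1;
	int target_max_cap = INT_MAX;
	int target_sg_cpu = INT_MAX;

	for (int i = 0; i < n; i++) {
		int idx = (start + i) % n;
		const struct group *sg = &g[idx];

		if (sg->capacity > target_max_cap)
			continue;

		/* proposed tie-break: equal capacity, keep lower CPU number */
		if (sg->capacity == target_max_cap &&
		    target_sg_cpu < sg->max_cap_cpu)
			continue;

		target_sg_cpu = sg->max_cap_cpu;
		target_max_cap = sg->capacity;
		target = idx;
	}
	return target;
}

int main(void)
{
	/* two identical LITTLE clusters: CPUs 0-3 and CPUs 4-7 */
	struct group g[] = { { 0, 447 }, { 4, 447 } };

	printf("iterate from cluster 0 -> pick cluster %d\n",
	       pick_group(g, 2, 0));
	printf("iterate from cluster 1 -> pick cluster %d\n",
	       pick_group(g, 2, 1));
	return 0;
}

With both clusters reporting the same capacity, both iteration orders end
up picking cluster 0, which is why small tasks woken from either cluster
would get consolidated onto one cluster instead of staying where they ran
last.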
> However I think there may be negative side effects with the proposed
> policy above as well - won't this cause us to pack the first cluster
> until it's 100% full (running at fmax) before using the second cluster?
> That would also be bad for power.
In the case where the CPUs are running at fmax, it's true that
task_fits_capacity() will still return true. But I think
cpu_overutilized() will also return true by then, which means the
scheduler falls back to CFS's normal load balancing. So in the end the
tasks will still be spread across the two clusters.
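To illustrate what I mean by "falls back", here is a minimal user-space
sketch of the over-utilization check. The ~80% margin (1280 vs.
SCHED_CAPACITY_SCALE = 1024) is my assumption for illustration only; the
real threshold is whatever the patch set defines:

/*
 * Rough user-space sketch, not the patch code: treat a CPU as
 * over-utilized once its utilization is within ~20% of its capacity
 * (margin of 1280/1024, i.e. util > capacity/1.25).  The margin value
 * here is an assumption for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL

static bool cpu_overutilized(unsigned long util, unsigned long capacity)
{
	return util * 1280 > capacity * 1024;
}

int main(void)
{
	unsigned long utils[] = { 512, 800, 850, 1024 };

	for (unsigned int i = 0; i < sizeof(utils) / sizeof(utils[0]); i++)
		printf("util=%4lu/%lu -> %s\n", utils[i], SCHED_CAPACITY_SCALE,
		       cpu_overutilized(utils[i], SCHED_CAPACITY_SCALE) ?
		       "over-utilized: fall back to normal CFS balancing" :
		       "not over-utilized: keep energy-aware placement");
	return 0;
}

So once the packed cluster's CPUs cross that margin, the wakeup path would
stop doing the energy-aware placement and the normal spreading takes over.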
I also reviewed the profiling results on Hikey with this modification
[1]: with rt-app tasks at 6%/13%/19%/25% utilization, the 8 tasks are
packed onto one cluster as far as possible, but with rt-app tasks at
31%/38%/44%/50% some tasks are also placed onto the second cluster.
Note that I drew this conclusion from the CPUs' idle duty cycles, not
from real power data.
[1] https://lists.linaro.org/pipermail/eas-dev/2015-September/000218.html
Thanks,
Leo Yan