Message-ID: <1389929776.5409.27.camel@marge.simpson.net>
Date: Fri, 17 Jan 2014 04:36:16 +0100
From: Mike Galbraith <efault@....de>
To: Alex Shi <alex.shi@...aro.org>
Cc: mingo@...hat.com, peterz@...radead.org, morten.rasmussen@....com,
vincent.guittot@...aro.org, daniel.lezcano@...aro.org,
wangyun@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched: select_idle_sibling macro optimize
On Fri, 2014-01-17 at 10:14 +0800, Alex Shi wrote:
> On 01/16/2014 09:52 PM, Mike Galbraith wrote:
> > On Thu, 2014-01-16 at 21:13 +0800, Alex Shi wrote:
> >> Add Mike Galbraith.
> >>
> >> Any one like to give some comments?
> >>
> >> On 01/15/2014 10:23 PM, Alex Shi wrote:
> >>> If the sched domain has only one group, we will just hit the
> >>> i == target check later and fall through to the deeper domain level
> >>> anyway. So skip checking this domain to save some instructions.
> >>>
> >>> Signed-off-by: Alex Shi <alex.shi@...aro.org>
> >>> ---
> >>> kernel/sched/fair.c | 5 +++++
> >>> 1 file changed, 5 insertions(+)
> >>>
> >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >>> index c7395d9..3265fbc 100644
> >>> --- a/kernel/sched/fair.c
> >>> +++ b/kernel/sched/fair.c
> >>> @@ -4196,6 +4196,11 @@ static int select_idle_sibling(struct task_struct *p, int target)
> >>> sd = rcu_dereference(per_cpu(sd_llc, target));
> >>> for_each_lower_domain(sd) {
> >>> sg = sd->groups;
> >>> +
> >>> + /* skip single group domain */
> >>> + if (sg == sg->next)
> >>> + continue;
> >
> > When is that gonna happen?
>
> I have seen this on an Intel platform: you can have both a CPU domain
> and an MC domain level, and because their domain flags differ they
> cannot be merged, so the CPU domain ends up with just one group.
But sd starts at MC.
-Mike
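
For reference, the fall-through the patch description relies on can be
modelled outside the kernel: in a single-group domain the group's CPU mask
necessarily contains target, so the inner scan in select_idle_sibling()
always hits the i == target check and the loop drops to the next lower
domain anyway. The stand-alone C sketch below illustrates that; the struct
group type, CPU numbers and idle states are invented for the example and
are not the kernel's struct sched_group or real topology data.

#include <stdio.h>
#include <stdbool.h>

/* Simplified stand-in for a scheduler group: a circular list of CPU sets. */
struct group {
	int cpus[8];
	int nr;
	struct group *next;
};

/*
 * Mimics the inner scan of select_idle_sibling(): a group is only usable
 * if every CPU in it is idle and none of them is the target itself.
 */
static bool group_usable(struct group *g, int target, const bool *idle)
{
	for (int i = 0; i < g->nr; i++) {
		int cpu = g->cpus[i];
		if (cpu == target || !idle[cpu])
			return false;
	}
	return true;
}

int main(void)
{
	bool idle[4] = { false, true, true, true };
	int target = 0;

	/*
	 * A single-group "domain" spanning all four CPUs, like the one-group
	 * CPU domain described in the thread: the group contains the target,
	 * so the scan can never accept it and we fall through regardless.
	 */
	struct group only = { .cpus = { 0, 1, 2, 3 }, .nr = 4 };
	only.next = &only;

	struct group *sg = &only;
	do {
		printf("group usable: %s\n",
		       group_usable(sg, target, idle) ? "yes" : "no");
		sg = sg->next;
	} while (sg != &only);

	return 0;	/* prints "group usable: no" */
}
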
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/