Message-ID: <20140424171619.GA11096@twins.programming.kicks-ass.net>
Date: Thu, 24 Apr 2014 19:16:19 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Andi Kleen <ak@...ux.intel.com>,
Len Brown <len.brown@...el.com>
Subject: Re: [PATCH] sched: Skip double execution of pick_next_task_fair
On Thu, Apr 24, 2014 at 10:06:07AM -0700, Tim Chen wrote:
> Yes, this version is more concise.
>
> >
> > It's a little more contained.
> >
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -2636,8 +2636,14 @@ pick_next_task(struct rq *rq, struct tas
> > if (likely(prev->sched_class == class &&
> > rq->nr_running == rq->cfs.h_nr_running)) {
> > p = fair_sched_class.pick_next_task(rq, prev);
> > - if (likely(p && p != RETRY_TASK))
> > - return p;
> > + if (unlikely(p == RETRY_TASK))
> > + goto again;
> > +
> > + /* assumes fair_sched_class->next == idle_sched_class */
> > + if (unlikely(!p))
> > + p = pick_next_task_idle(rq, prev);
> Should be
> p = idle_sched_class.pick_next_task(rq, prev);
Indeed, already fixed that when my compiler complained.
> > +
> > + return p;
> > }
> >
> > again:
>
> I'll respin the patch with these changes.
No need; I have the following queued:
---
Subject: sched: Skip double execution of pick_next_task_fair
From: Peter Zijlstra <peterz@...radead.org>
Date: Thu, 24 Apr 2014 12:00:47 +0200
Tim wrote:
The current code will call pick_next_task_fair a second time in the
slow path if we did not pull any task in our first try. This is
really unnecessary as we already know no task can be pulled and it
doubles the delay for the cpu to enter idle.
We instrumented some network workloads and saw that
pick_next_task_fair is frequently called twice before a cpu enters
idle. The call to pick_next_task_fair can add non-trivial latency as
it calls load_balance, which runs find_busiest_group on a hierarchy of
sched domains spanning the cpus of a large system. On some 4-socket
systems, we saw almost 0.25 msec spent per call of pick_next_task_fair
before a cpu can be idled.
Cc: Ingo Molnar <mingo@...e.hu>
Cc: Andi Kleen <ak@...ux.intel.com>
Cc: Len Brown <len.brown@...el.com>
Reported-by: Tim Chen <tim.c.chen@...ux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@...radead.org>
Link: http://lkml.kernel.org/r/20140424100047.GP11096@twins.programming.kicks-ass.net
---
kernel/sched/core.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2605,8 +2605,14 @@ pick_next_task(struct rq *rq, struct tas
if (likely(prev->sched_class == class &&
rq->nr_running == rq->cfs.h_nr_running)) {
p = fair_sched_class.pick_next_task(rq, prev);
- if (likely(p && p != RETRY_TASK))
- return p;
+ if (unlikely(p == RETRY_TASK))
+ goto again;
+
+ /* assumes fair_sched_class->next == idle_sched_class */
+ if (unlikely(!p))
+ p = idle_sched_class.pick_next_task(rq, prev);
+
+ return p;
}
again:
--