Message-ID: <20180420074456.GA4064@hirez.programming.kicks-ass.net>
Date: Fri, 20 Apr 2018 09:44:56 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Nicholas Piggin <npiggin@...il.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH] kernel/sched/core: busy wait before going idle

On Sun, Apr 15, 2018 at 11:31:49PM +1000, Nicholas Piggin wrote:
> This is a quick hack for comments, but I've always wondered --
> if we have short-term polling idle states in cpuidle for performance
> -- why not skip the context switch and entry into all the idle states,
> and just wait for a bit to see if something wakes up again.

Is that context switch so expensive?

And what kernel did you test on? We recently merged a bunch of patches
from Rafael that avoid disabling the tick when a short idle period is
predicted. That also improves performance for such workloads. Did your
kernel include those?

> It's not uncommon to see various going-to-idle work in kernel profiles.
> This might be a way to reduce that (as well as the cost of switching
> registers and the kernel stack to the idle thread). This can be an
> important path for single-thread request-response throughput.

So I feel that _if_ we do a spin here, it should only be long enough to
amortize the cost of the context switch itself.
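
Something like the untested sketch below is roughly what I would expect;
the spin budget is a made-up number here and would have to come from a
measured or tunable estimate of the switch cost, and the function name is
just for illustration:

/* Untested sketch only, not against any particular tree. */
#include <linux/sched.h>	/* need_resched(), cpu_relax() via arch headers */
#include <linux/sched/clock.h>	/* local_clock() */

static bool idle_poll_before_switch(void)
{
	u64 start = local_clock();
	u64 budget = 2000;	/* ~switch cost in ns; made-up number */

	/* Poll for at most 'budget' ns before really going idle. */
	while (!need_resched()) {
		if (local_clock() - start > budget)
			return false;	/* nothing arrived; go idle for real */
		cpu_relax();
	}

	return true;	/* work showed up while polling; skip the idle switch */
}

Bounding it like that keeps the worst case to roughly one extra switch
worth of wasted time, but it still leaves the accounting problem below.
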
However, doing busy waits here has the downside that the 'idle' time is
not in fact fed into the cpuidle predictor.
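
To make that concrete, here is a toy userspace illustration (this is not
the governor code, and all the numbers are made up): whatever the governor
uses for its idle-duration history, a pre-idle spin means the short
wakeups never show up in it, so the remaining samples skew long.

#include <stdio.h>

#define SPIN_NS		2000	/* made-up spin budget */
#define NR_GAPS		8

/* trivial stand-in for a predictor: just average the samples it saw */
static double predict(const long *samples, int n)
{
	long sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += samples[i];
	return n ? (double)sum / n : 0.0;
}

int main(void)
{
	/* made-up gaps between wakeups, in ns: mostly short, a few long */
	long gaps[NR_GAPS] = { 500, 800, 1200, 900, 50000, 700, 600, 80000 };
	long seen[NR_GAPS];
	int i, n = 0;

	/* with a pre-idle spin, only gaps longer than the budget reach cpuidle */
	for (i = 0; i < NR_GAPS; i++)
		if (gaps[i] > SPIN_NS)
			seen[n++] = gaps[i];

	printf("average over all gaps:          %.0f ns\n", predict(gaps, NR_GAPS));
	printf("average over what cpuidle sees: %.0f ns\n", predict(seen, n));
	return 0;
}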