Message-ID: <20140203145605.GL8874@twins.programming.kicks-ass.net>
Date: Mon, 3 Feb 2014 15:56:05 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Arjan van de Ven <arjan@...ux.intel.com>
Cc: Morten Rasmussen <morten.rasmussen@....com>,
Nicolas Pitre <nicolas.pitre@...aro.org>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Len Brown <len.brown@...el.com>,
Preeti Murthy <preeti.lkml@...il.com>,
"mingo@...hat.com" <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
LKML <linux-kernel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
Lists linaro-kernel <linaro-kernel@...ts.linaro.org>
Subject: Re: [RFC PATCH 3/3] idle: store the idle state index in the struct rq
Arjan, could you have a look at teaching your Thunderpants to wrap lines
at ~80 chars please?
On Mon, Feb 03, 2014 at 06:38:11AM -0800, Arjan van de Ven wrote:
> On 2/3/2014 4:54 AM, Morten Rasmussen wrote:
>
> >
> >I'm therefore not convinced that idle state index is the right thing to
> >give the scheduler. Using a cost metric would be better in my
> >opinion.
>
>
> I totally agree with this, and we may need two separate cost metrics:
>
> 1) A latency-driven one
> 2) A performance-impact one
>
> The first one is pretty much the exit-latency-related time, sort of an
> "expected time to first instruction" (currently the menu idle governor
> has the 99.999% worst-case number, which is not useful for this, but is
> a first approximation). This is obviously the dominating number for
> expected-short-running tasks.
>
> The second one is more of a "is there any cache/TLB left or is it flushed"
> kind of metric. It's trickier to compute, since what is the cost of
> an empty cache (or even a cache migration), after all... but I
> suspect it's in part what the scheduler will care about more for
> expected-long-running tasks.
Yeah, so currently we 'assume' cache hotness based on runtime; see
task_hot(). A hint that the CPU wiped its caches might help there.
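For context, the runtime heuristic referenced here boils down to something
like the sketch below. This is a simplified, standalone illustration of the
task_hot() idea, not the kernel's actual code, and the caches_wiped hint is
a hypothetical addition showing where an idle-state notification could plug
in:

#include <stdbool.h>
#include <stdint.h>

/* Tunable: how long after last execution a task is still considered
 * cache hot, in nanoseconds (plays the role of sysctl_sched_migration_cost). */
static uint64_t migration_cost_ns = 500000;	/* 0.5 ms */

struct task {
	uint64_t exec_start;	/* timestamp of last execution, ns */
	int	 cpu;		/* CPU the task last ran on */
};

struct cpu_state {
	bool caches_wiped;	/* hypothetical hint: deep idle flushed caches */
};

/*
 * A task is assumed cache hot if it ran recently enough that its working
 * set is probably still in the CPU's caches.  The caches_wiped test shows
 * where a "CPU wiped its caches" hint from cpuidle could short-circuit
 * that assumption.
 */
static bool task_hot(const struct task *p, uint64_t now,
		     const struct cpu_state *cpu)
{
	if (cpu->caches_wiped)
		return false;		/* nothing left to be hot in */

	return (now - p->exec_start) < migration_cost_ns;
}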
We also used to measure the entire cache migration cost between all
topologies in the system. That got ripped out when CFS got introduced,
but there have been a few people wanting to bring that back because the
single migration cost thingy simply doesn't work too well for some
workloads.
The reason Ingo took it out was that these measured numbers would
vary slightly from boot to boot, making it hard to compare performance
numbers across boots.
There's something to be said for either case I suppose.
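To make the two cost metrics discussed above concrete, here is a rough
sketch of what per-idle-state cost data of that shape might look like.
The struct, field names, and weights are purely illustrative assumptions,
not an existing kernel interface:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-idle-state cost information; none of these fields
 * exist in the kernel as such. */
struct idle_state_cost {
	/* 1) latency-driven: expected time to first instruction, in us */
	uint32_t exit_latency_us;

	/* 2) performance impact: what state the caches/TLB are left in */
	bool	 cache_retained;	/* cache contents survive this state */
	bool	 tlb_retained;		/* TLB contents survive this state */
};

/*
 * A scheduler-side consumer might prefer low exit latency for
 * expected-short tasks, and a CPU whose caches survived idle for
 * expected-long, cache-sensitive tasks.
 */
static uint32_t wakeup_cost(const struct idle_state_cost *c, bool short_task)
{
	if (short_task)
		return c->exit_latency_us;

	/* crude weighting: cold caches/TLB dominate for long-running work */
	return c->exit_latency_us +
	       (c->cache_retained ? 0 : 1000) +
	       (c->tlb_retained   ? 0 : 200);
}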