Message-ID: <20100127101342.GC12522@basil.fritz.box>
Date: Wed, 27 Jan 2010 11:13:42 +0100
From: Andi Kleen <andi@...stfloor.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
josh@...htriplett.org, dvhltc@...ibm.com, niv@...ibm.com,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com, arjan@...radead.org
Subject: Re: [PATCH RFC tip/core/rcu] accelerate grace period if last
non-dynticked CPU
> > Can't you simply check that at runtime then?
> >
> > if (num_possible_cpus() > 20)
> > ...
> >
> > BTW the new small is large. This year's high end desktop PC will come with
> > up to 12 CPU threads. It would likely be challenging to find a good
> > number to replace the 20 that holds up in the future.
>
> And this was another line of reasoning that led me to the extra kernel
> config parameter.
Which doesn't solve the problem at all.
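To be concrete, the runtime check I meant would look roughly like this
(untested sketch; the call site and the cutoff are placeholders, and as
said above, picking a cutoff that still makes sense in a few years is
exactly the hard part):

#include <linux/cpumask.h>

/*
 * Untested sketch: gate the grace-period acceleration at runtime
 * instead of at Kconfig time.  Would be called from rcu_needs_cpu()
 * or wherever the acceleration decision is made.  The "4" is a
 * made-up number.
 */
static bool rcu_fast_no_hz_worthwhile(void)
{
	return num_possible_cpus() <= 4;
}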
> > Or better perhaps have some threshold so that you don't do it
> > that often, or only do it when you expect to be idle for a long
> > enough time that the CPU can enter deeper idle states.
> >
> > (In higher idle states some more wakeups typically don't matter
> > that much.)
> >
> > The cpufreq/cstate governors have a reasonably good idea
> > now of how "idle" the system is and will be. Maybe you can reuse
> > that information somehow.
>
> My first thought was to find an existing "I am a small device running on
> battery power" or "low power consumption is critical to me" config
> parameter. I didn't find anything that looked like that. If there was
> one, I would make RCU_FAST_NO_HZ depend on it.
>
> Or did I miss some kernel parameter or API?
There are a few for scalability (e.g. numa_distance()), but they're
obscure. The really good ones are just known somewhere.
But I think in this case scalability is not the key thing to check
for, but expected idle latency. Even on a large system, if nearly all
CPUs are idle, spending some time to keep them idle even longer is a good
thing. But only if the CPUs actually benefit from long idle.
There's the "pm_qos_latency" framework that could be used for this
in theory, but it's not 100% the right match because it's not
dynamic.
Unfortunately, last time I looked the interfaces were rather clumsy
(e.g. they don't allow interrupt-level notifiers).
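Just to show what I mean, here is roughly how RCU could consume it
(untested, assuming the current pm_qos_params interface; the cutoff
value is made up):

#include <linux/pm_qos_params.h>

/* Made-up cutoff in usecs; would need tuning. */
#define RCU_ACCEL_LATENCY_CUTOFF	100

static bool deep_idle_allowed(void)
{
	/*
	 * If someone has requested a tight CPU wakeup latency, deep
	 * C-states are off the table anyway, so the extra grace-period
	 * work would buy us nothing.
	 */
	return pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY) >=
		RCU_ACCEL_LATENCY_CUTOFF;
}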
Better would be some insight into the expected future latency:
look at exporting this information from the various frequency/idle
governors.
Perhaps pm_qos_latency could be extended to support that?
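I don't have a concrete interface in mind; as a strawman only (the
function below is hypothetical, nothing like it is exported today):

/*
 * Strawman: the idle governor exports its prediction of how long
 * this CPU will stay idle.  No such export exists today.
 */
extern u64 cpuidle_predicted_idle_us(int cpu);	/* hypothetical */

static bool worth_accelerating_gp(int cpu, u64 threshold_us)
{
	/*
	 * Only pay the extra grace-period cost if the CPU is expected
	 * to stay idle long enough to amortize it.
	 */
	return cpuidle_predicted_idle_us(cpu) > threshold_us;
}

That would also cover the "only when nearly everything is idle for a
while" case above without having to guess a CPU count.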
CC Arjan, maybe he has some ideas on that.
After all of that there would still, of course, be the question of
what the right latency threshold should be, but at least that's
a much easier question than the number of CPUs.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.