Message-ID: <20131122113300.GM10022@twins.programming.kicks-ass.net>
Date: Fri, 22 Nov 2013 12:33:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Arjan van de Ven <arjan@...ux.intel.com>, lenb@...nel.org,
rjw@...ysocki.net, Eliezer Tamir <eliezer.tamir@...ux.intel.com>,
Chris Leech <christopher.leech@...el.com>,
David Miller <davem@...emloft.net>, rui.zhang@...el.com,
Mike Galbraith <bitbucket@...ine.de>,
Ingo Molnar <mingo@...nel.org>, hpa@...or.com,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH 3/7] idle, thermal, acpi: Remove home grown idle
implementations
On Thu, Nov 21, 2013 at 08:20:36PM -0800, Paul E. McKenney wrote:
> The 6ms to 25ms range should be just fine as far as normal RCU grace
> periods are concerned. However, it does mean that expedited grace
> periods could be delayed: They normally take a few tens of microseconds,
> but if they were unlucky enough to show up during an idle injection,
> they would be magnified by two to three orders of magnitude, which is
> not pretty.
>
> Hence my suggestion of hooking into RCU on idle-injection start and end
> so that RCU considers that time period to be idle, just as it does
> for user-mode execution on NO_HZ_FULL kernels. I still don't see
> this approach as a problem. I must confess that I still don't understand
> what Arjan doesn't like about it.
With these patches, idle injection would indeed go through the RCU idle
machinery, just like the normal idle path does.
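
For reference, this amounts to the usual entry/exit pairing (a minimal
sketch assuming the existing rcu_idle_enter()/rcu_idle_exit() API; the
function name and body below are illustrative, not the actual patch):

	static void do_idle_injection(u64 duration_ns)
	{
		/*
		 * Tell RCU this CPU is idle so grace periods need not
		 * wait on it, exactly as the normal idle loop does.
		 */
		rcu_idle_enter();

		/* ... actually idle the CPU for duration_ns ... */

		rcu_idle_exit();
	}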
If you want, I can add more WARN_ON()s in play_idle() to ensure we're not
called while holding any RCU locks.
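
Something like this (a sketch only; note rcu_read_lock_held() and
friends are only authoritative under lockdep, and the exact set of
checks is open for discussion):

	void play_idle(unsigned long duration_ms)
	{
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		/*
		 * Catch callers entering injected idle from inside an
		 * RCU read-side critical section. Guarded because
		 * without CONFIG_DEBUG_LOCK_ALLOC these helpers
		 * unconditionally return 1.
		 */
		WARN_ON_ONCE(rcu_read_lock_held());
		WARN_ON_ONCE(rcu_read_lock_bh_held());
	#endif
		/* ... */
	}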