Message-ID: <20150630220014.GA10916@cloud>
Date: Tue, 30 Jun 2015 15:00:15 -0700
From: josh@...htriplett.org
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH RFC tip/core/rcu 0/5] Expedited grace periods
encouraging normal ones

On Tue, Jun 30, 2015 at 02:48:05PM -0700, Paul E. McKenney wrote:
> Hello!
>
> This series contains some highly experimental patches that allow normal
> grace periods to take advantage of the work done by concurrent expedited
> grace periods. This can reduce the overhead incurred by normal grace
> periods by eliminating the need for force-quiescent-state scans that
> would otherwise have happened after the expedited grace period completed.
> It is not clear whether this is a useful tradeoff. Nevertheless, this
> series contains the following patches:
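
If I'm reading the mechanism right, the idea boils down to something
like the sketch below (illustrative C only; the struct, field, and
function names here are all made up and are not the names the patches
use): an expedited grace period forces every CPU through a quiescent
state, so if one completed after a normal grace period began, that
normal grace period's requirement is already satisfied and the
force-quiescent-state scan can be skipped.

/*
 * Illustrative only -- made-up names, not the actual patch code.
 */
struct gp_state {
	unsigned long gp_seq_start;	/* seq when this normal GP began */
	unsigned long exp_seq_done;	/* seq of last completed expedited GP */
};

static bool gp_satisfied_by_expedited(const struct gp_state *gs)
{
	/*
	 * An expedited GP that completed after this normal GP started
	 * has already driven every CPU through a quiescent state, so
	 * the FQS scan would be redundant.  Real code would need a
	 * wrap-safe sequence comparison here.
	 */
	return gs->exp_seq_done >= gs->gp_seq_start;
}
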
While it makes sense to avoid unnecessarily delaying a normal grace
period if the expedited machinery has provided the necessary delay, I'm
also *deeply* concerned that this will create a new class of
nondeterministic performance issues. Something that uses RCU may
perform badly due to grace period latency, but then suddenly start
performing well because an unrelated task starts hammering expedited
grace periods. This seems particularly likely during boot, for
instance, where RCU grace periods can be a significant component of boot
time (when you're trying to boot to userspace in small fractions of a
second).
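
Concretely, the kind of coupling I'm worried about could be seen with
a measurement like the one sketched below (hypothetical test module;
synchronize_rcu() and synchronize_rcu_expedited() are the real kernel
APIs, everything else is made up for illustration):

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/rcupdate.h>
#include <linux/ktime.h>
#include <linux/err.h>

/* Spin issuing expedited grace periods until asked to stop. */
static int hammer_fn(void *unused)
{
	while (!kthread_should_stop())
		synchronize_rcu_expedited();
	return 0;
}

static int __init gp_latency_init(void)
{
	struct task_struct *hammer;
	ktime_t t0;

	/* Baseline: normal grace period with no expedited load. */
	t0 = ktime_get();
	synchronize_rcu();
	pr_info("normal GP alone: %lld ns\n",
		ktime_to_ns(ktime_sub(ktime_get(), t0)));

	/*
	 * Same measurement while an unrelated thread hammers
	 * expedited grace periods.
	 */
	hammer = kthread_run(hammer_fn, NULL, "exp-hammer");
	if (IS_ERR(hammer))
		return PTR_ERR(hammer);
	t0 = ktime_get();
	synchronize_rcu();
	pr_info("normal GP under expedited hammering: %lld ns\n",
		ktime_to_ns(ktime_sub(ktime_get(), t0)));
	kthread_stop(hammer);
	return 0;
}
module_init(gp_latency_init);
MODULE_LICENSE("GPL");

With these patches the second number could come out much smaller than
the first even though nothing about the measured task changed, which
is exactly the unrelated-workload coupling I mean.
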
- Josh Triplett