Message-Id: <20180626180303.GD3593@linux.vnet.ibm.com>
Date: Tue, 26 Jun 2018 11:03:03 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 06/27] rcu: Mark task as .need_qs less
aggressively
On Tue, Jun 26, 2018 at 07:08:12PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 25, 2018 at 05:34:52PM -0700, Paul E. McKenney wrote:
> > If any scheduling-clock interrupt interrupts an RCU-preempt read-side
> > critical section, the interrupted task's ->rcu_read_unlock_special.b.need_qs
> > field is set. This causes the outermost rcu_read_unlock() to incur the
> > extra overhead of calling into rcu_read_unlock_special(). This commit
> > reduces that overhead by setting ->rcu_read_unlock_special.b.need_qs only
> > if the grace period has been in effect for more than one second.
>
> Even less aggressive is never setting it at all.
True, but if the CPU has been in an RCU read-side critical section for
a full second (which is the case with high probability when .b.need_qs
is set after this change), we might want to respond to the end of that
critical section sooner rather than later.
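
For reference, the check in question is roughly the following (untested
sketch, approximate names, not the literal patch -- the function name and
the gp_start_jiffies parameter are placeholders):

	#include <linux/jiffies.h>	/* jiffies, HZ, time_after() */
	#include <linux/sched.h>	/* struct task_struct */

	/*
	 * Sketch: the scheduling-clock interrupt flags the current
	 * RCU-preempt reader for special handling at rcu_read_unlock()
	 * time only once the current grace period has been in effect
	 * for more than one second (HZ jiffies).
	 */
	static void rcu_preempt_tick_check(struct task_struct *t,
					   unsigned long gp_start_jiffies)
	{
		if (t->rcu_read_lock_nesting > 0 &&		/* Inside a reader? */
		    !t->rcu_read_unlock_special.b.need_qs &&	/* Not already flagged? */
		    time_after(jiffies, gp_start_jiffies + HZ))	/* GP more than 1s old? */
			t->rcu_read_unlock_special.b.need_qs = true;
	}
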
> Changelog fails to explain why not setting it every tick is correct, nor
> why 1s is a 'safe' value to use.
The RCU CPU stall warning timeout cannot be set to less than 3s, so 1s is
reasonable.  It is a tradeoff -- setting the threshold lower causes a
greater fraction of RCU read-side critical sections to incur extra
overhead at rcu_read_unlock() time, while setting it higher means that
longer critical sections are lazier about reporting their quiescent
states to core RCU.
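
That "extra overhead" is the outermost rcu_read_unlock() leaving its fast
path, roughly as in this untested sketch (approximate names, simplified
from the real nesting handling; the helper name is a placeholder):

	/*
	 * Sketch: the outermost rcu_read_unlock() stays on the fast path
	 * unless some ->rcu_read_unlock_special bit (such as .b.need_qs)
	 * is set, in which case it calls into the slower
	 * rcu_read_unlock_special() to report the quiescent state.
	 */
	static void rcu_read_unlock_outermost_sketch(struct task_struct *t)
	{
		if (--t->rcu_read_lock_nesting == 0 &&
		    unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
			rcu_read_unlock_special(t);	/* Slow path. */
	}
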
The upcoming RCU-bh/RCU-preempt/RCU-sched consolidation will increase
contention and overhead, so this is one of several changes intended to
compensate for that.
Thanx, Paul