[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20150504203801.GG5381@linux.vnet.ibm.com>
Date: Mon, 4 May 2015 13:38:01 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Rik van Riel <riel@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, williams@...hat.com,
Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
Peter Zijlstra <peterz@...radead.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: question about RCU dynticks_nesting
On Mon, May 04, 2015 at 04:13:50PM -0400, Rik van Riel wrote:
> On 05/04/2015 04:02 PM, Paul E. McKenney wrote:
> > On Mon, May 04, 2015 at 03:39:25PM -0400, Rik van Riel wrote:
> >> On 05/04/2015 02:39 PM, Paul E. McKenney wrote:
> >>> On Mon, May 04, 2015 at 11:59:05AM -0400, Rik van Riel wrote:
> >>
> >>>> In fact, would we be able to simply use tsk->rcu_read_lock_nesting
> >>>> as an indicator of whether or not we should bother waiting on that
> >>>> task or CPU when doing synchronize_rcu?
> >>>
> >>> Depends on exactly what you are asking. If you are asking if I could add
> >>> a few more checks to preemptible RCU and speed up grace-period detection
> >>> in a number of cases, the answer is very likely "yes". This is on my
> >>> list, but not particularly high priority. If you are asking whether
> >>> CPU 0 could access ->rcu_read_lock_nesting of some task running on
> >>> some other CPU, in theory, the answer is "yes", but in practice that
> >>> would require putting full memory barriers in both rcu_read_lock()
> >>> and rcu_read_unlock(), so the real answer is "no".
> >>>
> >>> Or am I missing your point?
> >>
> >> The main question is "how can we greatly reduce the overhead
> >> of nohz_full, by simplifying the RCU extended quiescent state
> >> code called in the syscall fast path, and maybe piggyback on
> >> that to do time accounting for remote CPUs?"
> >>
> >> Your memory barrier answer above makes it clear we will still
> >> want to do the RCU stuff at syscall entry & exit time, at least
> >> on x86, where we already have automatic and implicit memory
> >> barriers.
> >
> > We do need to keep in mind that x86's automatic and implicit memory
> > barriers do not order prior stores against later loads.
> >
> > Hmmm... But didn't earlier performance measurements show that the bulk of
> > the overhead was the delta-time computations rather than RCU accounting?
>
> The bulk of the overhead was disabling and re-enabling
> irqs around the calls to rcu_user_exit and rcu_user_enter :)
Really??? OK... How about software irq masking? (I know, that is
probably a bit of a scary change as well.)
> Of the remaining time, about 2/3 seems to be the vtime
> stuff, and the other 1/3 the rcu code.
OK, worth some thought, then.
> I suspect it makes sense to optimize both, though the
> vtime code may be the easiest :)
Making a crude version that does jiffies (or whatever) instead of
fine-grained computations might give good bang for the buck. ;-)
Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists