Message-ID: <20150222061050.GX5745@linux.vnet.ibm.com>
Date: Sat, 21 Feb 2015 22:10:50 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Josh Triplett <josh@...htriplett.org>
Cc: Arjan van de Ven <arjan@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
tglx@...utronix.de, rostedt@...dmis.org, dhowells@...hat.com,
edumazet@...gle.com, dvhart@...ux.intel.com, fweisbec@...il.com,
oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 0/4] Programmatic nestable expedited grace
periods

On Sat, Feb 21, 2015 at 07:58:07PM -0800, Josh Triplett wrote:
> On Sat, Feb 21, 2015 at 07:51:34AM -0800, Arjan van de Ven wrote:
> > >>
> > >>there's a few others as well that I'm chasing down...
> > >>.. but the flip side, prior to running ring 3 code, why NOT do fast expedites?
> > >
> > >It would be good to have before-and-after measurements of actual
> > >boot time. Are these numbers available?
> >
> > To show the boot time, I'm using the timestamp of the "Write protecting" line;
> > that's pretty much the last thing we print prior to ring 3 execution.
>
> That's a little sad; we ought to be write-protecting kernel read-only
> data as *early* as possible.
>
> > A kernel with default RCU behavior (inside KVM, only virtual devices) looks like this:
> >
> > [ 0.038724] Write protecting the kernel read-only data: 10240k
> >
> > A kernel with expedited RCU (using the command-line option, so that I don't have
> > to recompile between measurements and thus am completely oranges-to-oranges) looks like this:
> >
> > [ 0.031768] Write protecting the kernel read-only data: 10240k
> >
> > which works out to an 18% improvement.
>
> Nice improvement, but that suggests that we're spending far too much
> time waiting on RCU grace periods at boot time.
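
For anyone following along, the command-line option in question
(rcupdate.rcu_expedited=1) just sets the global rcu_expedited flag, after
which the normal update-side primitives take the expedited path.  In
rough sketch form -- simplified, not the exact code in kernel/rcu/ -- it
amounts to:

	void synchronize_rcu(void)
	{
		if (rcu_expedited)
			/* Force the grace period to complete quickly. */
			synchronize_rcu_expedited();
		else
			/* Wait out a normal grace period, typically milliseconds. */
			wait_rcu_gp(call_rcu);
	}
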
Let's see... 0.038724 - 0.031768 = 0.006956, or about seven milliseconds.
This might be as many as ten grace periods, but is more likely to be
about two of them. Of course, this counts only the grace periods after
the scheduler starts, as those prior to scheduler start are no-ops,
courtesy of your single-CPU optimization.
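
(In sketch form, that no-op case is just an early return at the top of
the same primitive -- again simplified rather than the exact in-tree
code.  Before the scheduler is running there is only the boot CPU and
nothing has been preempted, so all readers have already completed:)

	void synchronize_rcu(void)
	{
		/* Pre-scheduler: the grace period is free. */
		if (!rcu_scheduler_active)
			return;

		/* ... otherwise wait for (or expedite) a real grace period ... */
	}
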
So, how many grace periods between scheduler start and init spawning
do you feel would be appropriate?
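
For concreteness, here is a sketch of how boot-time code could use the
nestable rcu_expedite_gp()/rcu_unexpedite_gp() interface from this series
to bracket exactly that window.  The call sites below are hypothetical,
purely for illustration:

	#include <linux/init.h>
	#include <linux/rcupdate.h>

	/* Hypothetical call sites bracketing the post-scheduler boot window. */
	static int __init boot_expedite_gp(void)
	{
		rcu_expedite_gp();	/* nestable: expedite all grace periods */
		return 0;
	}
	early_initcall(boot_expedite_gp);

	static int __init boot_unexpedite_gp(void)
	{
		rcu_unexpedite_gp();	/* pairs with the call above, shortly before init spawns */
		return 0;
	}
	late_initcall(boot_unexpedite_gp);
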
Thanx, Paul