Date:   Wed, 3 Jul 2019 16:37:44 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc:     linux-kernel@...r.kernel.org, Davidlohr Bueso <dave@...olabs.net>,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC] rcuperf: Make rcuperf test more robust for !expedited mode

On Wed, Jul 03, 2019 at 10:23:44AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 03, 2019 at 12:39:45AM -0400, Joel Fernandes (Google) wrote:
> > It is possible that rcuperf runs concurrently with init starting up.
> > During this time, the system runs all grace periods as expedited.
> > However, rcuperf can also be run in a normal mode. The rcuperf test
> > depends on a holdoff period before starting the test to ensure grace
> > periods start later. This works fine with the default holdoff time;
> > however, it is not robust in situations where init takes longer than
> > the holdoff time to finish running. Or, as in my case:
> > 
> > I modified the rcuperf test locally to also run a thread that did
> > preempt disable/enable in a loop. This had the effect of slowing down
> > init. The end result was that the "batches:" counter was 0. This was
> > because only expedited GPs seemed to happen, not normal ones, which led
> > to the rcu_state.gp_seq counter remaining constant across grace periods
> > that unexpectedly happened to be expedited.
> > 
> > Debugging this showed that even though the test was meant to measure
> > normal GP performance, init had not yet run far enough for the
> > rcu_unexpedite_gp() call to happen. In other words, the test would run
> > concurrently with init booting, and thus in expedited GP mode.
> > 
> > To fix this properly, let us just check whether rcu_unexpedite_gp()
> > has been called yet before starting the writer test. With this, the
> > holdoff parameter could also be dropped or reduced to speed up the test.
> > 
> > Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> > ---
> > Please consider this patch as an RFC only! This is the first time I am
> > running the RCU performance tests, thanks!
> 
> Another approach is to create (say) a late_initcall() function that
> sets a global variable.  Then have the wait loop wait for that global
> variable to be set.  Or use an explicit wait/wakeup scheme, if you wish.
> 
> This has the virtue of keeping this (admittedly small) bit of complexity
> out of the core kernel.

Agreed, I thought of the late_initcall approach as well. I will respin the
patch to do that.
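
Roughly along these lines -- a minimal sketch with made-up names (not the
respun patch): a late_initcall() sets a flag, and the writer kthread polls
until it is set before starting measurements:

#include <linux/init.h>		/* late_initcall() */
#include <linux/sched.h>	/* schedule_timeout_uninterruptible() */

/* Set once late initcalls have run, i.e. the kernel side of init is done. */
static bool rcu_perf_booted;

static int __init rcu_perf_booted_set(void)
{
	WRITE_ONCE(rcu_perf_booted, true);
	return 0;
}
late_initcall(rcu_perf_booted_set);

/* Called from the writer kthread before taking any measurements. */
static void rcu_perf_wait_for_boot(void)
{
	while (!READ_ONCE(rcu_perf_booted))
		schedule_timeout_uninterruptible(HZ / 10);
}

An explicit wait/wakeup scheme (say, a completion) would avoid the polling,
at the cost of slightly more code.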

> > Question:
> > I actually did not know that expedited GPs do not increment
> > rcu_state.gp_seq. Do expedited GPs not go through the same RCU-tree
> > machinery as non-expedited ones? If yes, why doesn't rcu_state.gp_seq
> > increment when we are expedited? If no, why not?
> 
> They are indeed (mostly) independent mechanisms.
> 
> This is in contrast to SRCU, where an expedited grace period does what
> you expect, causing all grace periods to do less waiting until the
> most recent expedited grace period has completed.
> 
> Why the difference?
> 
> o	Current SRCU uses have relatively few updates, so the decreases
> 	in batching effectiveness for normal grace periods are less
> 	troublesome than they would be for RCU.  Shortening RCU grace
> 	periods would significantly increase per-update overhead, for
> 	example, and less so for SRCU.
> 
> o	RCU uses a much more distributed design, which means that
> 	expediting an already-started RCU grace period would be more
> 	challenging than it is for SRCU.  The race conditions between
> 	an "expedite now!" event and the various changes in state for
> 	a normal RCU grace period would be challenging.
> 
> o	In addition, RCU's more distributed design results in
> 	higher latencies.  Expedited RCU grace periods simply bypass
> 	this and get much better latencies.
> 
> So, yes, normal and expedited RCU grace periods could be converged, but
> it does not seem like a good idea given current requirements.

Thanks a lot for the explanation of these subtleties. I really appreciate
it, and it will serve as a great future reference for everyone (and for my
notes!).
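
For my notes, the API surface being contrasted above is roughly the
following (a sketch only; my_srcu is a hypothetical srcu_struct used to
anchor the names):

#include <linux/rcupdate.h>	/* synchronize_rcu{,_expedited}() */
#include <linux/srcu.h>		/* synchronize_srcu{,_expedited}() */

DEFINE_STATIC_SRCU(my_srcu);	/* hypothetical, for illustration only */

static void gp_flavors(void)
{
	/* Normal RCU GP: driven by the rcu_state machinery, advancing
	 * rcu_state.gp_seq. */
	synchronize_rcu();

	/* Expedited RCU GP: a (mostly) independent mechanism with its own
	 * sequence counter, which is why rcu_state.gp_seq stays constant. */
	synchronize_rcu_expedited();

	/* SRCU: expedited GPs share the normal machinery, so all SRCU GPs
	 * do less waiting until the most recent expedited one completes. */
	synchronize_srcu(&my_srcu);
	synchronize_srcu_expedited(&my_srcu);
}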

Thanks again!

- Joel
