Date:	Mon, 4 May 2015 22:54:13 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Paolo Bonzini <pbonzini@...hat.com>,
	Ingo Molnar <mingo@...nel.org>,
	Andy Lutomirski <luto@...capital.net>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	X86 ML <x86@...nel.org>, williams@...hat.com,
	Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
	Peter Zijlstra <peterz@...radead.org>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: question about RCU dynticks_nesting

On Mon, May 04, 2015 at 04:53:16PM -0400, Rik van Riel wrote:
> On 05/04/2015 04:38 PM, Paul E. McKenney wrote:
> > On Mon, May 04, 2015 at 04:13:50PM -0400, Rik van Riel wrote:
> >> On 05/04/2015 04:02 PM, Paul E. McKenney wrote:
> 
> >>> Hmmm...  But didn't earlier performance measurements show that the bulk of
> >>> the overhead was the delta-time computations rather than RCU accounting?
> >>
> >> The bulk of the overhead was disabling and re-enabling
> >> irqs around the calls to rcu_user_exit and rcu_user_enter :)
> > 
> > Really???  OK...  How about software irq masking?  (I know, that is
> > probably a bit of a scary change as well.)
> > 
> >> Of the remaining time, about 2/3 seems to be the vtime
> >> stuff, and the other 1/3 the rcu code.
> > 
> > OK, worth some thought, then.
> > 
> >> I suspect it makes sense to optimize both, though the
> >> vtime code may be the easiest :)
> > 
> > Making a crude version that does jiffies (or whatever) instead of
> > fine-grained computations might give good bang for the buck.  ;-)
> 
> Ingo's idea is to simply have cpu 0 check the current task
> on all other CPUs, see whether that task is running in system
> mode, user mode, guest mode, irq mode, etc., and update that
> task's vtime accordingly.
> 
> I suspect the runqueue lock is probably enough to do that,
> and between rcu state and PF_VCPU we probably have enough
> information to see what mode the task is running in, with
> just remote memory reads.
> 
> I looked at implementing the vtime bits (and am pretty sure
> how to do those now), and then spent some hours looking at
> the RCU bits, to see if we could not simplify both things at
> once, especially considering that the current RCU context
> tracking bits need to be called with irqs disabled.

Remotely sampling the vtime info without memory barriers makes sense.
After all, the result is statistical anyway.  Unfortunately, as noted
earlier, RCU correctness depends on ordering.
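
A minimal userspace sketch of the remote-sampling scheme described above,
for illustration only: every name in it (cpu_mode[], worker(), sampler(),
and so on) is made up, and none of it is kernel code.  One "housekeeping"
thread charges a coarse tick per sample to whatever mode it observes each
worker to be in, using only relaxed loads, so the totals are statistical
in exactly the sense discussed here.

#define _DEFAULT_SOURCE
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_CPUS 4
enum mode { MODE_USER, MODE_SYSTEM, MODE_GUEST, NR_MODES };

static _Atomic int cpu_mode[NR_CPUS];          /* written by workers  */
static unsigned long ticks[NR_CPUS][NR_MODES]; /* written by sampler  */
static _Atomic int stop;

/* Stand-in for a nohz_full CPU bouncing between user/system/guest. */
static void *worker(void *arg)
{
	int cpu = (int)(long)arg;
	unsigned int seed = cpu + 1;

	while (!atomic_load_explicit(&stop, memory_order_relaxed)) {
		/* Relaxed store: no barriers on the fast path. */
		atomic_store_explicit(&cpu_mode[cpu],
				      rand_r(&seed) % NR_MODES,
				      memory_order_relaxed);
		usleep(1000 + rand_r(&seed) % 3000);   /* "run" a while */
	}
	return NULL;
}

/* Stand-in for the housekeeping CPU doing the accounting remotely. */
static void *sampler(void *arg)
{
	(void)arg;
	for (int tick = 0; tick < 2000; tick++) {
		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			int m = atomic_load_explicit(&cpu_mode[cpu],
						     memory_order_relaxed);
			ticks[cpu][m]++;        /* charge one coarse tick */
		}
		usleep(1000);                   /* ~1 kHz sampling "tick" */
	}
	atomic_store_explicit(&stop, 1, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t w[NR_CPUS], s;

	for (long cpu = 0; cpu < NR_CPUS; cpu++)
		pthread_create(&w[cpu], NULL, worker, (void *)cpu);
	pthread_create(&s, NULL, sampler, NULL);

	pthread_join(s, NULL);
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		pthread_join(w[cpu], NULL);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: user=%lu system=%lu guest=%lu\n", cpu,
		       ticks[cpu][MODE_USER], ticks[cpu][MODE_SYSTEM],
		       ticks[cpu][MODE_GUEST]);
	return 0;
}

Built with "gcc -O2 -pthread" and run, each simulated CPU's tick counts
come out roughly proportional to the time it spent in each mode, which is
all the coarse accounting needs.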

The current RCU idle entry/exit code most definitely requires that
irqs be disabled.  However, I will see if that can be changed.
No promises, especially no short-term promises, but it does not feel
impossible.

You have RCU_FAST_NO_HZ=y, correct?  Could you please try measuring with
RCU_FAST_NO_HZ=n?  If that has a significant effect, an easy quick win is
turning it off -- and I could then make it a boot parameter to get you
back to one kernel for everyone.  (The existing tick_nohz_active boot
parameter already turns it off, but also turns off dyntick idle, which
might be a bit excessive.)  Or if there is some way that the kernel can
know that the system is currently running on battery or some such.
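
As for making CONFIG_RCU_FAST_NO_HZ switchable at boot, a rough and purely
hypothetical sketch of what that could look like (no such parameter exists
today; the flag name and the "off" syntax are invented here):

/* Hypothetical: default to whatever the kernel was built with. */
#include <linux/cache.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/types.h>

static bool rcu_fast_no_hz __read_mostly =
	IS_ENABLED(CONFIG_RCU_FAST_NO_HZ);

static int __init rcu_fast_no_hz_setup(char *str)
{
	if (str && !strcmp(str, "off"))
		rcu_fast_no_hz = false;
	return 0;
}
early_param("rcu_fast_no_hz", rcu_fast_no_hz_setup);

/*
 * The code currently compiled in or out by CONFIG_RCU_FAST_NO_HZ (for
 * example the rcu_prepare_for_idle() path) would instead check the
 * flag at runtime:
 *
 *	if (!rcu_fast_no_hz)
 *		return;
 */

With something along those lines, booting with "rcu_fast_no_hz=off" would
give the RCU_FAST_NO_HZ=n behavior without a rebuild, which is the "one
kernel for everyone" outcome mentioned above.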

							Thanx, Paul

