Open Source and information security mailing list archives
 
Message-ID: <5547DC3C.1000504@redhat.com>
Date:	Mon, 04 May 2015 16:53:16 -0400
From:	Rik van Riel <riel@...hat.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	Paolo Bonzini <pbonzini@...hat.com>,
	Ingo Molnar <mingo@...nel.org>,
	Andy Lutomirski <luto@...capital.net>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	X86 ML <x86@...nel.org>, williams@...hat.com,
	Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
	Peter Zijlstra <peterz@...radead.org>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: question about RCU dynticks_nesting

On 05/04/2015 04:38 PM, Paul E. McKenney wrote:
> On Mon, May 04, 2015 at 04:13:50PM -0400, Rik van Riel wrote:
>> On 05/04/2015 04:02 PM, Paul E. McKenney wrote:

>>> Hmmm...  But didn't earlier performance measurements show that the bulk of
>>> the overhead was the delta-time computations rather than RCU accounting?
>>
>> The bulk of the overhead was disabling and re-enabling
>> irqs around the calls to rcu_user_exit and rcu_user_enter :)
> 
> Really???  OK...  How about software irq masking?  (I know, that is
> probably a bit of a scary change as well.)
> 
>> Of the remaining time, about 2/3 seems to be the vtime
>> stuff, and the other 1/3 the rcu code.
> 
> OK, worth some thought, then.
> 
>> I suspect it makes sense to optimize both, though the
>> vtime code may be the easiest :)
> 
> Making a crude version that does jiffies (or whatever) instead of
> fine-grained computations might give good bang for the buck.  ;-)

Ingo's idea is to simply have cpu 0 check the current task
on all other CPUs, see whether that task is running in system
mode, user mode, guest mode, irq mode, etc., and update that
task's vtime accordingly.

I suspect the runqueue lock is enough to do that, and
between the RCU state and PF_VCPU we probably have enough
information to tell what mode the task is running in, with
just remote memory reads.

I looked at implementing the vtime bits (and am pretty sure
how to do those now), and then spent some hours looking at
the RCU bits, to see if we could not simplify both things at
once, especially considering that the current RCU context
tracking bits need to be called with irqs disabled.

-- 
All rights reversed
