Date:	Fri, 8 Aug 2014 12:43:40 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
	mingo@...nel.org, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, dhowells@...hat.com,
	edumazet@...gle.com, dvhart@...ux.intel.com, fweisbec@...il.com,
	bobby.prani@...il.com, masami.hiramatsu.pt@...achi.com
Subject: Re: [PATCH v3 tip/core/rcu 3/9] rcu: Add synchronous grace-period
 waiting for RCU-tasks

On Fri, 8 Aug 2014 18:27:14 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> On Fri, Aug 08, 2014 at 10:58:58AM -0400, Steven Rostedt wrote:
> 
> > > > No, they are also used by optimized kprobes. This is why optimized
> > > > kprobes depend on !CONFIG_PREEMPT. [ added Masami to the discussion ].
> > > 
> > > How do those work? Is that the one where the INT3 relocates the
> > > instruction stream into an alternative 'text' and then JMPs back
> > > into the original stream at the end?
> > 
> > No, it's where we replace the 'int3' with a jump to a trampoline that
> > simulates an INT3. Speeds things up quite a bit.
> 
> OK, so the trivial 'fix' for that is to patch the probe site like:
> 
> 	preempt_disable();		INC	GS:%__preempt_count
> 	call trampoline;		CALL	0xDEADBEEF
> 	preempt_enable();		DEC	GS:%__preempt_count
> 					JNZ	1f
> 					CALL	___preempt_schedule
> 				1f:
> 
> At which point the preempt_disable/enable() are the read side primitives
> and call_rcu_sched/synchronize_sched are sufficient to release it.
> 
> With the per-cpu preempt count stuff we have on x86 that is 4
> instructions for the preempt_*() stuff -- they're 'big' instructions
> though, since 3 have memops and 2 have a segment prefix.
> 
> 

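To make the lifetime rule concrete, the writer side of that scheme
would be something like the sketch below. This is illustrative only:
unoptimize_kprobe_site() and free_trampoline() are made-up names, not
the real kprobes interfaces.

	/*
	 * The patched site runs the trampoline with preemption
	 * disabled, so the preempt-off region acts as the RCU-sched
	 * read side.  Tearing a trampoline down is then: restore the
	 * original text, wait for an RCU-sched grace period, free.
	 */
	static void release_trampoline(struct optimized_kprobe *op)
	{
		/* Put the original instructions back first. */
		unoptimize_kprobe_site(op);		/* hypothetical */

		/*
		 * Any CPU still inside the trampoline entered it with
		 * preemption disabled; once every CPU has been seen
		 * preemptible again, none can still be executing it.
		 */
		synchronize_sched();

		free_trampoline(op);			/* hypothetical */
	}
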
Now the question is, how do you do that atomically? And safely.
Currently, all we replace at each call site is a single nop: gcc -pg
emits a call to mcount, and we turn that call into a nop at boot.
Patching a multi-instruction sequence like the above looks much more
complex than our current solution.

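For comparison, the single slot we patch today looks roughly like
this (illustrative only; the actual nop encoding is CPU-dependent):

	<func>:				# as compiled with gcc -pg
		call mcount
		...

	<func>:				# after boot, tracing disabled
		nop			# 5-byte nop patched in by ftrace
		...

	<func>:				# tracing enabled
		call ftrace_caller	# one CALL, a single patch slot
		...

Flipping that one slot between a nop and a CALL is a very different
problem from atomically installing a multi-instruction sequence.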

-- Steve