Date:	Thu, 14 Jan 2010 00:39:42 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Lai Jiangshan <laijs@...fujitsu.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, josh@...htriplett.org,
	tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
	barrier

* Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> On Thu, Jan 14, 2010 at 10:56:08AM +0800, Lai Jiangshan wrote:
> > Paul E. McKenney wrote:
> > > On Mon, Jan 11, 2010 at 03:21:04PM -0500, Mathieu Desnoyers wrote:
> > >> * Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> > >>> On Sun, Jan 10, 2010 at 11:25:21PM -0500, Mathieu Desnoyers wrote:
> > >>>> * Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> > >>>> [...]
> > >>>>>> Even when taking the spinlocks, efficient iteration on active threads is
> > >>>>>> done with for_each_cpu(cpu, mm_cpumask(current->mm)), which depends on
> > >>>>>> the same cpumask, and thus requires the same memory barriers around the
> > >>>>>> updates.
> > >>>>> Ouch!!!  Good point and good catch!!!
> > >>>>>
> > >>>>>> We could switch to an inefficient iteration on all online CPUs instead,
> > >>>>>> and check each runqueue's ->mm with the spinlock held. Is that what you
> > >>>>>> propose? This will cause reading of large amounts of runqueue
> > >>>>>> information, especially on large systems running few threads. The other
> > >>>>>> way around is to iterate on all the process threads: in this case, small
> > >>>>>> systems running many threads will have to read information about many
> > >>>>>> inactive threads, which is not much better.
> > >>>>> I am not all that worried about exactly what we do as long as it is
> > >>>>> pretty obviously correct.  We can then improve performance when and as
> > >>>>> the need arises.  We might need to use any of the strategies you
> > >>>>> propose, or perhaps even choose among them depending on the number of
> > >>>>> threads in the process, the number of CPUs, and so forth.  (I hope not,
> > >>>>> but...)
> > >>>>>
> > >>>>> My guess is that an obviously correct approach would work well for a
> > >>>>> slowpath.  If someone later runs into performance problems, we can fix
> > >>>>> them with the added knowledge of what they are trying to do.
> > >>>>>
> > >>>> OK, here is what I propose. Let's choose between two implementations
> > >>>> (v3a and v3b), which implement two "obviously correct" approaches. In
> > >>>> summary:
> > >>>>
> > >>>> * baseline (based on 2.6.32.2)
> > >>>>    text	   data	    bss	    dec	    hex	filename
> > >>>>   76887	   8782	   2044	  87713	  156a1	kernel/sched.o
> > >>>>
> > >>>> * v3a: ipi to many using mm_cpumask
> > >>>>
> > >>>> - adds smp_mb__before_clear_bit()/smp_mb__after_clear_bit() before and
> > >>>>   after mm_cpumask stores in context_switch(). They are only executed
> > >>>>   when oldmm and mm are different. (it's my turn to hide behind an
> > >>>>   appropriately-sized boulder for touching the scheduler). ;) Note that
> > >>>>   it's not that bad, as these barriers turn into simple compiler barrier()
> > >>>>   on:
> > >>>>     avr32, blackfin, cris, frv, h8300, m32r, m68k, mn10300, score, sh,
> > >>>>     sparc, x86 and xtensa.
> > >>>>   The less lucky architectures gaining two smp_mb() are:
> > >>>>     alpha, arm, ia64, mips, parisc, powerpc and s390.
> > >>>>   ia64 is gaining only one smp_mb() thanks to its acquire semantic.
> > >>>> - size
> > >>>>    text	   data	    bss	    dec	    hex	filename
> > >>>>   77239	   8782	   2044	  88065	  15801	kernel/sched.o
> > >>>>   -> adds 352 bytes of text
> > >>>> - Number of lines (system call source code, w/o comments) : 18
> > >>>>
> > >>>> * v3b: iteration on min(num_online_cpus(), nr threads in the process),
> > >>>>   taking runqueue spinlocks, allocating a cpumask, ipi to many to the
> > >>>>   cpumask. Does not allocate the cpumask if only a single IPI is needed.
> > >>>>
> > >>>> - only adds sys_membarrier() and related functions.
> > >>>> - size
> > >>>>    text	   data	    bss	    dec	    hex	filename
> > >>>>   78047	   8782	   2044	  88873	  15b29	kernel/sched.o
> > >>>>   -> adds 1160 bytes of text
> > >>>> - Number of lines (system call source code, w/o comments) : 163
> > >>>>
> > >>>> I'll reply to this email with the two implementations. Comments are
> > >>>> welcome.
> > >>> Cool!!!  Just for completeness, I point out the following trivial
> > >>> implementation:
> > >>>
> > >>> /*
> > >>>  * sys_membarrier - issue memory barrier on current process running threads
> > >>>  *
> > >>>  * Execute a memory barrier on all running threads of the current process.
> > >>>  * Upon completion, the caller thread is guaranteed that all process threads
> > >>>  * have passed through a state where memory accesses match program order.
> > >>>  * (non-running threads are de facto in such a state)
> > >>>  *
> > >>>  * Note that synchronize_sched() has the side-effect of doing a memory
> > >>>  * barrier on each CPU.
> > >>>  */
> > >>> SYSCALL_DEFINE0(membarrier)
> > >>> {
> > >>> 	synchronize_sched();
> > >>> 	return 0;
> > >>> }
> > >>>
> > >>> This does unnecessarily hit all CPUs in the system, but has the same
> > >>> minimal impact that in-kernel RCU already has.  It has long latency,
> > >>> (milliseconds) which might well disqualify it from consideration for
> > >>> some applications.  On the other hand, it automatically batches multiple
> > >>> concurrent calls to sys_membarrier().
> > >> Benchmarking this implementation:
> > >>
> > >> 1000 calls to sys_membarrier() take:
> > >>
> > >> T=1: 0m16.007s
> > >> T=2: 0m16.006s
> > >> T=3: 0m16.010s
> > >> T=4: 0m16.008s
> > >> T=5: 0m16.005s
> > >> T=6: 0m16.005s
> > >> T=7: 0m16.005s
> > >>
> > >> That's 16 ms per call (my HZ is 250), as you expected. So this solution
> > >> brings a slowdown of 10,000 times compared to the IPI-based solution.
> > >> We'd be better off using signals instead.
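(To be clear about the methodology: the figures above are simply the
wall-clock time, measured with "time", of a loop doing 1000 bare
syscalls, roughly as in the sketch below. __NR_membarrier is whatever
syscall number the patched test kernel assigns to the new call.)

#include <unistd.h>
#include <sys/syscall.h>	/* __NR_membarrier comes from the patched kernel headers */

int main(void)
{
	int i;

	/* Each call waits for a full synchronize_sched() grace period in
	 * this trivial implementation, hence roughly 16 ms per iteration. */
	for (i = 0; i < 1000; i++)
		syscall(__NR_membarrier);
	return 0;
}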
> > > 
> > > From a latency viewpoint, yes.  But synchronize_sched() consumes far
> > > less CPU time than do signals, avoids waking up sleeping CPUs, batches
> > > concurrent requests, and seems to be of some use in the kernel.  ;-)
> > > 
> > > But, as I said, just for completeness.
> > > 
> > > 							Thanx, Paul
> > 
> > 
> > Actually, I like this implementation.
> > (synchronize_sched() needs to be changed to synchronize_kernel_and_user_sched()
> > or something else)
> 
> The global memory barrier is indeed very much a side-effect of
> synchronize_sched(), not its main purpose; you are right that its name
> is a bit strange for this purpose.  ;-)

It's not a "synchronize_user_sched()" at all though. Because, as you say
below, it's only part of the solution. The rest of the synchronization
needed for RCU is performed by liburcu. The kernel system call, in this
proposal, is just one piece of the puzzle.
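To make that split of responsibilities concrete, here is a minimal sketch
(not the actual liburcu code) of how the write side of a
sys_membarrier()-based urcu flavor fits together: the system call stands
in for the heavy read-side memory barriers, and liburcu still has to wait
for the per-thread reader counters to quiesce. wait_for_readers() is left
as a stub here, and __NR_membarrier is whatever number the patched kernel
assigns to the new system call.

#include <pthread.h>
#include <unistd.h>
#include <sys/syscall.h>	/* __NR_membarrier comes from the patched kernel headers */

static pthread_mutex_t rcu_gp_lock = PTHREAD_MUTEX_INITIALIZER;

static void membarrier(void)
{
	syscall(__NR_membarrier);	/* smp_mb() on all running threads of this process */
}

/* Stub: in liburcu this scans the registered reader threads and waits
 * for their per-thread counters to reach a quiescent state. */
static void wait_for_readers(void)
{
}

void synchronize_rcu(void)
{
	pthread_mutex_lock(&rcu_gp_lock);
	membarrier();		/* order the updater's prior stores before scanning readers */
	wait_for_readers();
	membarrier();		/* order reader accesses before the reclamation that follows */
	pthread_mutex_unlock(&rcu_gp_lock);
}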

> 
> > The IPI implementation and the signal implementation cost too much,
> > while this implementation just waits until things are done, at very low cost.
> > 
> > The kernel RCU grace period typically takes 3/HZ seconds
> > (for all implementations except preemptable RCU). That is a large
> > latency, but I think it is not a problem:
> > 1) users should call synchronize_sched() only rarely anyway.
> > 2) If users care about this latency, they can just implement a userland call_rcu
> 
> In the common case, you are correct.  On the other hand, we did need to
> do synchronize_rcu_expedited() and friends in the kernel, so it is
> reasonable to expect that user-level RCU uses will also need expedited
> interfaces.

Yes, I can foresee that some library users will require relatively fast
synchronize_rcu() execution. Even if there might be better designs based
on call_rcu() implementations (I currently have a defer_rcu() which is
quite close), we cannot force all library users to use such a perfect
design.
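For reference, the general shape of such a call_rcu()-style design in
userspace is roughly the following (a sketch with made-up names, not my
actual defer_rcu() code; it is essentially the same batching scheme Lai
sketches below):

#include <pthread.h>
#include <stddef.h>
#include <unistd.h>

struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
static struct rcu_head *cb_list;	/* pending callbacks, newest first */

void synchronize_rcu(void);		/* grace period, e.g. sys_membarrier()-based */

/* Enqueue a callback; the caller never waits for a grace period. */
void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *head))
{
	head->func = func;
	pthread_mutex_lock(&cb_lock);
	head->next = cb_list;
	cb_list = head;
	pthread_mutex_unlock(&cb_lock);
}

/* Worker thread: one grace period is amortized over a whole batch. */
void *rcu_callback_thread(void *arg)
{
	(void)arg;
	for (;;) {
		struct rcu_head *list, *next;

		pthread_mutex_lock(&cb_lock);
		list = cb_list;		/* grab the current batch */
		cb_list = NULL;
		pthread_mutex_unlock(&cb_lock);

		if (!list) {
			usleep(1000);	/* a real implementation would sleep on a futex/condvar */
			continue;
		}

		synchronize_rcu();	/* wait once for the whole batch */

		for (; list; list = next) {
			next = list->next;
			list->func(list);
		}
	}
	return NULL;
}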

> 
> > userland_call_rcu() {
> > 	insert rcu_head to rcu_callback_list.
> > }
> > 
> > rcu_callback_thread()
> > {
> > 	for (;;) {
> > 		handl_list = rcu_callback_list;
> > 		rcu_callback_list = NULL;
> > 
> > 		userland_synchronize_sched();
> > 
> > 		handle the callback in handl_list
> > 	}
> > }
> > 3) kernel RCU vs. a userland IPI-implementation RCU:
> > userland_synchronize_sched() would have lower latency than kernel RCU?
> > userland would have enough priority to send a lot of IPIs?
> > It sounds crazy to me.
> 
> You say "crazy" as if it was a bad thing.  ;-)
> 
> (Sorry, couldn't resist...)
> 
> But it is important to keep in mind that sys_membarrier() is just one
> part of the user-level RCU implementation.  When you add in the necessary
> waiting on per-thread counters, the user-level RCU is probably not that
> much cheaper than the expedited in-kernel RCU primitives.

Indeed, these overheads are probably quite close.
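(For reference, the per-thread counter wait Paul refers to is, in
simplified form, a scan like the one below. This is a sketch, not the
actual liburcu code: the real implementation also flips the grace-period
phase twice per synchronize_rcu(), handles read-side nesting more
carefully, and registers reader threads dynamically.)

#include <unistd.h>

#define RCU_GP_CTR_PHASE	(1UL << 16)		/* flipped by the updater each grace period */
#define RCU_NEST_MASK		(RCU_GP_CTR_PHASE - 1)	/* read-side nesting count */

struct reader {
	unsigned long ctr;	/* snapshot of rcu_gp_ctr plus nesting, set by rcu_read_lock() */
	struct reader *next;
};

static struct reader *registry;		/* registered reader threads */
static unsigned long rcu_gp_ctr = 1;	/* global grace-period counter/phase */

/* True if the thread is inside a read-side critical section that began
 * before the updater flipped RCU_GP_CTR_PHASE in rcu_gp_ctr. */
static int reader_is_old(const struct reader *r)
{
	unsigned long v = *(volatile const unsigned long *)&r->ctr;

	return (v & RCU_NEST_MASK) && ((v ^ rcu_gp_ctr) & RCU_GP_CTR_PHASE);
}

/* Called by the updater after flipping the phase in rcu_gp_ctr. */
static void wait_for_readers(void)
{
	const struct reader *r;

	for (r = registry; r; r = r->next)
		while (reader_is_old(r))
			usleep(1);	/* wait for the reader to exit or observe the new phase */
}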

> 
> > See also this email (2010-01-11) I sent to you offlist:
> > > /* Lai Jiangshan defines it for fun */
> > > #define synchronize_kernel_sched() synchronize_sched()
> > > 
> > > /* We can use the current RCU code to implement one of the following */
> > > extern void synchronize_kernel_and_user_sched(void);
> > > extern void synchronize_user_sched(void);
> > > 
> > > /*
> > >  * wait until all CPUs (which are in userspace) enter the kernel and call mb()
> > >  * (recommended)
> > >  */
> > > extern void synchronize_user_mb(void);
> > > 
> > > void sys_membarrier(void)
> > > {
> > > 	/*
> > > 	 * 1) We add very little overhead to the kernel; we just wait in kernel space.
> > > 	 * 2) Several processes which call sys_membarrier() wait on the same *batch*.
> > > 	 */
> > > 
> > > 	synchronize_kernel_and_user_sched();
> > > 	/* OR synchronize_user_sched()/synchronize_user_mb() */
> > > }
> 
> If I am not getting too confused, Mathieu's latest patch does do
> synchronize_sched() for the non-expedited case.  Mathieu pointed it
> out in his email of January 9th, though not as a serious suggestion,
> from what I can tell.  Your (private) email was indeed next, so as far
> as I am concerned you do indeed share the credit/blame for suggesting
> use of synchronize_sched() as a long-latency/low-overhead implementation
> of sys_membarrier().
> 
> Mathieu, given that Lai has now posted publicly, could you please include
> at least a note crediting him for the first serious suggestion of using
> synchronize_sched()?

Yep, will do in v6.

Thanks!

Mathieu

> 
> 							Thanx, Paul

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
