Date:	Wed, 3 Feb 2016 09:33:39 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Will Deacon <will.deacon@....com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Boqun Feng <boqun.feng@...il.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>,
	"Maciej W. Rozycki" <macro@...tec.com>,
	David Daney <ddaney@...iumnetworks.com>,
	Måns Rullgård <mans@...sr.com>,
	Ralf Baechle <ralf@...ux-mips.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH] mips: Fix arch_spin_unlock()


* Will Deacon <will.deacon@....com> wrote:

> On Tue, Feb 02, 2016 at 10:06:36AM -0800, Linus Torvalds wrote:
> > On Tue, Feb 2, 2016 at 9:51 AM, Will Deacon <will.deacon@....com> wrote:
> > >
> > > Given that the vast majority of weakly ordered architectures respect
> > > address dependencies, I would expect all of them to be hurt if they
> > > were forced to use barrier instructions instead, even those where the
> > > microarchitecture is fairly strongly ordered in practice.
> > 
> > I do wonder if it would be all that noticeable, though. I don't think
> > we've really had benchmarks.
> > 
> > For example, most of the RCU list traversal cost shows up on x86 - where
> > loads are already acquires. But it shows up not because of that, but
> > because an RCU list traversal is pretty much always going to take the
> > cache miss.
> > 
> > So it would actually be interesting to just try it - what happens to
> > kernel-centric benchmarks (which are already fairly rare) on arm if we
> > change the rcu_dereference() to be a smp_load_acquire()?
> > 
> > Because maybe nothing happens at all. I don't think we've ever tried it.
> 
> FWIW, and this is by no means conclusive, I hacked that up quickly and ran 
> hackbench a few times on the nearest idle arm64 system. The results were 
> consistently ~4% slower using acquire for rcu_dereference.

Could you please double-check that? The thing is that hackbench is a _notoriously_ 
unstable workload and very dependent on various small details such as kernel image 
layout and random per-bootup cache/memory layout details.

In fact I'd suggest testing this via a quick runtime hack like this in rcupdate.h:

	extern int panic_timeout;

	...

	typeof(*p) *________p1;

	if (panic_timeout)
		/* acquire semantics */
		________p1 = (typeof(*p) *__force)smp_load_acquire(&(p));
	else
		/* address-dependency ordering, as today */
		________p1 = (typeof(*p) *__force)lockless_dereference(p);

(or so)
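
To show where that would slot in, here's a rough sketch against the current 
__rcu_dereference_check() in rcupdate.h - hypothetical and untested, reusing the 
extern declaration above, and keeping the existing lockdep/sparse checks:

	#define __rcu_dereference_check(p, c, space) \
	({ \
		typeof(*p) *________p1; \
		\
		if (panic_timeout) \
			/* acquire: orders against all later loads/stores */ \
			________p1 = (typeof(*p) *__force)smp_load_acquire(&(p)); \
		else \
			/* address-dependency ordering only */ \
			________p1 = (typeof(*p) *__force)lockless_dereference(p); \
		RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
		rcu_dereference_sparse(p, space); \
		((typeof(*p) __force __kernel *)(________p1)); \
	})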

You can then run hackbench in a loop, and switch the ordering primitive from 
another terminal via:

   echo 1 > /proc/sys/kernel/panic		# smp_load_acquire()
   echo 0 > /proc/sys/kernel/panic		# smp_read_barrier_depends()

without having to reboot the kernel.
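
E.g. a driver loop like this (any fixed hackbench invocation will do):

   while :; do hackbench; done

and flip the sysctl above between batches of runs.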

Also, instead of using hackbench, whose runtime is too short and hence sensitive 
to scheduling micro-details, you could try the perf bench hackbench work-alike, 
where the number of loops is parametric:

 triton:~/tip> perf bench sched messaging -l 10000
 # Running 'sched/messaging' benchmark:
 # 20 sender and receiver processes per group
 # 10 groups == 400 processes run

     Total time: 4.532 [sec]

and you can get a concrete estimate of the noise via:

 triton:~/tip> perf stat --null --repeat 10 perf bench sched messaging -l 10000

 [...]

 Performance counter stats for 'perf bench sched messaging -l 10000' (10 runs):

       4.616404309 seconds time elapsed                                          ( +-  1.67% )

Note that even with a repeat count of 10 runs and a loop count 100 times larger 
than the hackbench default, the intrinsic noise of this workload was still ~1.7% - 
and that does not include boot-to-boot systematic noise.

It's very easy to pick up systematic noise with hackbench workloads and go down 
entirely the wrong road.

Of course, the numbers might also confirm your 4% figure!

Thanks,

	Ingo
