Message-ID: <5585AAA0.1030305@sr71.net>
Date:	Sat, 20 Jun 2015 11:02:08 -0700
From:	Dave Hansen <dave@...1.net>
To:	paulmck@...ux.vnet.ibm.com
CC:	Andi Kleen <ak@...ux.intel.com>, dave.hansen@...ux.intel.com,
	akpm@...ux-foundation.org, jack@...e.cz, viro@...iv.linux.org.uk,
	eparis@...hat.com, john@...nmccutchan.com, rlove@...ve.org,
	tim.c.chen@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] fs: optimize inotify/fsnotify code for unwatched
 files

On 06/19/2015 07:21 PM, Paul E. McKenney wrote:
>>> What is so expensive in it? Just the memory barrier in it?
>>
>> The profiling doesn't hit on the mfence directly, but I assume that the
>> overhead is coming from there.  The "mov    0x8(%rdi),%rcx" is identical
>> before and after the barrier, but it appears much more expensive
>> _after_.  That makes no sense unless the barrier is the thing causing it.
>
> OK, one thing to try is to simply delete the memory barrier.  The
> resulting code will be unsafe, but will probably run well enough to
> get benchmark results.  If it is the memory barrier, you should of
> course get increased throughput.
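
For reference, __srcu_read_lock() in kernel/rcu/srcu.c looked roughly like
this at the time (a from-memory sketch, so treat the exact field names as
approximate).  The smp_mb() below is the barrier in question:

	int __srcu_read_lock(struct srcu_struct *sp)
	{
		int idx;

		idx = READ_ONCE(sp->completed) & 0x1;
		__this_cpu_inc(sp->per_cpu_ref->c[idx]);	/* mark this reader */
		smp_mb(); /* B */	/* order the mark before the critical section */
		__this_cpu_inc(sp->per_cpu_ref->seq[idx]);	/* count this entry */
		return idx;
	}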

So I took the smp_mb() out of __srcu_read_lock().  The benchmark didn't
improve at all.  Looking at the profile, all of the overhead had just
shifted to __srcu_read_unlock() and its memory barrier!  Removing the
barrier in __srcu_read_unlock() got essentially the same gains out of
the benchmark as the original patch in this thread that just avoids RCU.
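
The unlock path has the mirror-image barrier.  Again a from-memory sketch
of the 2015-era code, not verbatim:

	void __srcu_read_unlock(struct srcu_struct *sp, int idx)
	{
		smp_mb(); /* C */	/* order the critical section before the decrement */
		this_cpu_dec(sp->per_cpu_ref->c[idx]);	/* unmark this reader */
	}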

I think that's fairly conclusive that the source of the overhead is,
indeed, the memory barriers.

Although I said this test was single-threaded, I also had another thought.
The benchmark is single-threaded, but 'perf' is sitting there doing
profiling and who knows what else on the other core, and the profiling
NMIs are certainly writing plenty of data to memory.  So there might be
plenty of work for that smp_mb()/mfence to do _despite_ the benchmark
itself being single-threaded.
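
One way to poke at that theory from userspace (a hypothetical sketch, not
something I actually ran; the buffer size and iteration count are made up)
would be to time a bare mfence loop with and without a second thread
dirtying memory on another core:

	/* gcc -O2 -pthread mfence_bench.c -o mfence_bench (x86-64 only) */
	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <x86intrin.h>

	#define ITERS	100000000UL
	#define BUFSZ	(1UL << 20)

	static volatile int stop;

	static void *writer(void *arg)	/* keep the memory system busy */
	{
		volatile char *p = arg;
		size_t i = 0;

		while (!stop)
			p[i++ % BUFSZ]++;
		return NULL;
	}

	static double cycles_per_mfence(void)
	{
		unsigned int aux;
		uint64_t start, end;
		unsigned long i;

		start = __rdtscp(&aux);
		for (i = 0; i < ITERS; i++)
			_mm_mfence();
		end = __rdtscp(&aux);
		return (double)(end - start) / ITERS;
	}

	int main(void)
	{
		pthread_t tid;
		void *buf = malloc(BUFSZ);

		printf("idle:   %.1f cycles/mfence\n", cycles_per_mfence());
		pthread_create(&tid, NULL, writer, buf);
		printf("loaded: %.1f cycles/mfence\n", cycles_per_mfence());
		stop = 1;
		pthread_join(tid, NULL);
		free(buf);
		return 0;
	}

If the NMI-traffic theory holds, the loaded number should come out
noticeably worse than the idle one.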