Date:	Mon, 6 Jun 2016 13:46:23 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Dave Hansen <dave.hansen@...el.com>
Cc:	"Chen, Tim C" <tim.c.chen@...el.com>,
	Ingo Molnar <mingo@...hat.com>,
	Davidlohr Bueso <dbueso@...e.de>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>,
	Jason Low <jason.low2@...com>,
	Michel Lespinasse <walken@...gle.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Waiman Long <waiman.long@...com>,
	Al Viro <viro@...iv.linux.org.uk>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: performance delta after VFS i_mutex=>i_rwsem conversion

On Mon, Jun 6, 2016 at 1:00 PM, Dave Hansen <dave.hansen@...el.com> wrote:
>
> I tracked this down to the differences between:
>
>         rwsem_spin_on_owner() - false roughly 1% of the time
>         mutex_spin_on_owner() - false roughly 0.05% of the time
>
> The optimistic rwsem and mutex code look quite similar, but there is one
> big difference: a hunk of code in rwsem_spin_on_owner() stops the
> spinning for rwsems, but isn't present for mutexes in any form:
>
>>         if (READ_ONCE(sem->owner))
>>                 return true; /* new owner, continue spinning */
>>
>>         /*
>>          * When the owner is not set, the lock could be free or
>>          * held by readers. Check the counter to verify the
>>          * state.
>>          */
>>         count = READ_ONCE(sem->count);
>>         return (count == 0 || count == RWSEM_WAITING_BIAS);
>
> If I hack this out, I end up with:
>
>         d9171b9(mutex-original): 689179
>         9902af7(rwsem-hacked  ): 671706 (-2.5%)
>
> I think it's safe to say that this accounts for the majority of the
> difference in behavior.
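
For reference, the mutex-side spinner being compared against looks roughly
like this (a simplified paraphrase of mutex_spin_on_owner() from this era of
the kernel, not verbatim source): it spins only while the recorded owner is
still running on a CPU, and once the owner changes or clears it simply tells
the optimistic-spin loop to keep going -- there is no counter check anywhere
on that path.

        static noinline
        bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
        {
                bool ret = true;

                rcu_read_lock();
                while (lock->owner == owner) {
                        /*
                         * Re-check lock->owner before dereferencing owner, so
                         * we never poke at a task that has already dropped
                         * the lock.
                         */
                        barrier();

                        /* Give up if the owner sleeps or we must reschedule. */
                        if (!owner->on_cpu || need_resched()) {
                                ret = false;
                                break;
                        }

                        cpu_relax_lowlatency();
                }
                rcu_read_unlock();

                /*
                 * ret stays true when the owner changed or went away while
                 * still running; unlike the rwsem hunk quoted above, the lock
                 * count is never consulted before telling the caller to keep
                 * spinning.
                 */
                return ret;
        }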

So my gut feel is that we do want to have the same heuristics for
rwsems and mutexes (well, modulo possible actual semantic differences
due to the whole shared-vs-exclusive issues).

And I also suspect that the mutexes have gotten a lot more performance
tuning done on them, so it's likely the correct thing to try to make
the rwsem match the mutex code rather than the other way around.

I think we had Jason and Davidlohr doing mutex work last year; let's see
if they agree with that "yes, the mutex case is likely the more tuned
one" feeling.

The fact that your performance improves when you do that obviously also
validates the assumption that the mutex spinning is the better
optimized one.
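
Concretely, the change being discussed would presumably look something like
this on the rwsem side (an untested sketch, and only a guess at the exact
shape of the hack): drop the sem->count inspection from the tail of
rwsem_spin_on_owner() and keep spinning whenever the owner field changes
hands or clears, which is what the mutex spinner effectively does.

        -       if (READ_ONCE(sem->owner))
        -               return true; /* new owner, continue spinning */
        -
        -       /*
        -        * When the owner is not set, the lock could be free or
        -        * held by readers. Check the counter to verify the
        -        * state.
        -        */
        -       count = READ_ONCE(sem->count);
        -       return (count == 0 || count == RWSEM_WAITING_BIAS);
        +       /*
        +        * New owner or no owner at all: keep spinning either way,
        +        * like mutex_spin_on_owner() does, rather than giving up
        +        * based on a snapshot of sem->count.
        +        */
        +       return true;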

> So, as it stands today in 4.7-rc1, mutexes end up yielding higher
> performance under contention.  But they don't let the system go very
> idle, even under heavy contention, which seems rather wrong.  Should we
> be making rwsems spin more, or mutexes spin less?

I think performance is what matters. The fact that it performs better
with spinning is a big mark for spinning more.

Being idle under load is _not_ something we should see as a good
thing. Yes, yes, it would be lower power, but lock contention is *not*
a low-power load. Being slow under lock contention just tends to make
for more lock contention, and trying to increase idle time is almost
certainly the wrong thing to do.

Spinning behavior tends to have a secondary advantage too: it is a
hell of a lot nicer to do performance analysis on. So if you get lock
contention on real loads (as opposed to some extreme
unlink-microbenchmark), I think a lot of people will be happier seeing
the spinning behavior just because it helps pinpoint the problem in
ways idling does not.

So I think everything points to: "make rwsems do the same thing
mutexes do". But I'll let the locking maintainers pipe up. Peter? Ingo?

              Linus
