Date:	Fri, 27 Sep 2013 12:28:15 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Waiman Long <Waiman.Long@...com>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>,
	Peter Hurley <peter@...leysoftware.com>,
	Davidlohr Bueso <davidlohr.bueso@...com>,
	Alex Shi <alex.shi@...el.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Matthew R Wilcox <matthew.r.wilcox@...el.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <andi@...stfloor.org>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH] rwsem: reduce spinlock contention in wakeup code path

On Fri, Sep 27, 2013 at 12:00 PM, Waiman Long <Waiman.Long@...com> wrote:
>
> On a large NUMA machine, it is entirely possible that a fairly large
> number of threads are queuing up in the ticket spinlock queue to do
> the wakeup operation, when in fact only one of them is needed. This
> patch tries to reduce spinlock contention by letting just one thread
> do the wakeup.
>
> A new wakeup field is added to the rwsem structure. This field is
> set on entry to rwsem_wake() and __rwsem_do_wake() to mark that a
> thread is already doing the wakeup call, and it is cleared on exit
> from those functions.
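
For illustration, the idea looks roughly like this - just a sketch
with made-up names, not the actual patch:

	struct rw_semaphore_sketch {
		atomic_t	wakeup_pending;	/* nonzero while a wakeup is in flight */
		raw_spinlock_t	wait_lock;
		/* ... count, wait_list etc ... */
	};

	static void rwsem_wake_sketch(struct rw_semaphore_sketch *sem)
	{
		/* Only the first caller wins; everybody else just leaves. */
		if (atomic_cmpxchg(&sem->wakeup_pending, 0, 1) != 0)
			return;

		raw_spin_lock(&sem->wait_lock);
		/* ... walk the wait list and wake waiters up ... */
		raw_spin_unlock(&sem->wait_lock);

		atomic_set(&sem->wakeup_pending, 0);
	}

The win is that waiters that see wakeup_pending set never touch the
contended wait_lock at all.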

Ok, this is *much* simpler than adding the new MCS spinlock, so I'm
wondering what the performance difference between the two is.

I'm obviously always in favor of just removing lock contention over
trying to improve lock scalability, so I really like Waiman's
approach over Tim's new MCS lock. Not because I dislike MCS locks in
general (or you, Tim ;) - it's really more fundamental: I simply
believe more in trying to avoid lock contention than in trying to
improve lock behavior when that contention happens.
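
For contrast, the core of an MCS lock looks roughly like this (again
just a sketch, memory barriers omitted, not Tim and Alex's actual
patches): each waiter spins on its own queue node instead of on a
shared cache line, so the contention is still there, it's just
cheaper.

	struct mcs_node {
		struct mcs_node *next;
		int		 locked;
	};

	static void mcs_lock(struct mcs_node **lock, struct mcs_node *node)
	{
		struct mcs_node *prev;

		node->next = NULL;
		node->locked = 0;

		prev = xchg(lock, node);	/* atomically queue at the tail */
		if (!prev)
			return;			/* lock was free, we own it */

		ACCESS_ONCE(prev->next) = node;
		while (!ACCESS_ONCE(node->locked))
			cpu_relax();		/* spin on our *own* node */
	}

	static void mcs_unlock(struct mcs_node **lock, struct mcs_node *node)
	{
		struct mcs_node *next = ACCESS_ONCE(node->next);

		if (!next) {
			/* No visible successor: try to release outright. */
			if (cmpxchg(lock, node, NULL) == node)
				return;
			/* Somebody is queuing up; wait for the link. */
			while (!(next = ACCESS_ONCE(node->next)))
				cpu_relax();
		}
		ACCESS_ONCE(next->locked) = 1;	/* hand the lock over */
	}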

As such, I love exactly these kinds of things that Waiman's patch
does, and I'm heavily biased.

But I know I'm heavily biased, so I'd really like to get comparative
numbers for these things. Waiman, can you compare your patch with
Tim's (and Alex's) 6-patch series that makes rwsems use MCS locks
for the spinlock?

The numbers Tim quotes for the MCS patch series ("high_systime (+7%)")
are lower than the ones you quote (16-20%), but that may be due to
differences in hardware platform and methodology. Tim was also
looking at exim performance.

So Tim/Waiman, mind comparing the two approaches on the setups you have?

                   Linus