Date:	Mon, 30 Sep 2013 08:57:03 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Davidlohr Bueso <davidlohr@...com>,
	Waiman Long <Waiman.Long@...com>, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>,
	Peter Hurley <peter@...leysoftware.com>,
	Davidlohr Bueso <davidlohr.bueso@...com>,
	Alex Shi <alex.shi@...el.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Matthew R Wilcox <matthew.r.wilcox@...el.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <andi@...stfloor.org>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH] rwsem: reduce spinlock contention in wakeup code path


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> [...]
> 
> And your numbers for Ingo's patch:
> 
> > After testing Ingo's anon-vma rwlock_t conversion (v2) on an 8-socket, 
> > 80-core system with aim7, I am quite surprised by the numbers - 
> > considering the lack of queuing in rwlocks. A lot of the tests showed 
> > hardly any difference, but those that really contend this lock 
> > (with high numbers of users) benefited quite nicely:
> >
> > Alltests: +28% throughput after 1000 users and runtime was reduced from
> > 7.2 to 6.6 secs.
> >
> > Custom: +61% throughput after 100 users and runtime was reduced from 7
> > to 4.9 secs.
> >
> > High_systime: +40% throughput after 1000 users and runtime was reduced
> > from 19 to 15.5 secs.
> >
> > Shared: +30.5% throughput after 100 users and runtime was reduced from
> > 6.5 to 5.1 secs.
> >
> > Short: Lots of variance in the numbers, but avg of +29% throughput - 
> > no particular performance degradation either.
> 
> Are just overwhelming, in my opinion. The conversion *from* a spinlock 
> never had this kind of support behind it.

Agreed. Especially given how primitive rwlock_t is, particularly on 80 cores, 
this is really a no-brainer conversion.
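
For reference, a minimal sketch of what such a rwsem -> rwlock_t conversion 
looks like in general. The 'demo_map' structure and functions below are made 
up purely for illustration - this is not the actual anon-vma patch:

#include <linux/rwsem.h>
#include <linux/spinlock.h>

/* Hypothetical structure, for illustration only. */
struct demo_map {
        struct rw_semaphore sem;        /* before: sleeping reader/writer lock */
        rwlock_t lock;                  /* after:  spinning reader/writer lock */
        unsigned long nr_entries;
};

/* Before the conversion: the read side may sleep while waiting. */
static unsigned long demo_count_rwsem(struct demo_map *m)
{
        unsigned long n;

        down_read(&m->sem);
        n = m->nr_entries;
        up_read(&m->sem);

        return n;
}

/* After the conversion: the read side spins and must never sleep. */
static unsigned long demo_count_rwlock(struct demo_map *m)
{
        unsigned long n;

        read_lock(&m->lock);
        n = m->nr_entries;
        read_unlock(&m->lock);

        return n;
}

The write side changes analogously: down_write()/up_write() become 
write_lock()/write_unlock(), and init_rwsem() becomes rwlock_init().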

I have to say I am surprised by the numbers - after so many years it's 
still amazing how powerful the "get work done and don't interrupt it" 
batching concept is in computing...

> Btw, did anybody run Ingo's patch with lockdep and the spinlock sleep 
> debugging code to verify that we haven't introduced any problems wrt 
> sleeping since the lock was converted into a rw-semaphore?
> 
> Because quite frankly, considering these kinds of numbers, I really 
> don't see how we could possibly make excuses for keeping that 
> rw-semaphore unless there is some absolutely _horrible_ latency issue?

Given that there are only about a dozen critical sections that this lock 
covers, I simply cannot imagine any latency problem that couldn't be fixed 
in some other fashion (shrinking the critical section, breaking up a bad 
loop, etc.).
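
As a trivial illustration of the 'shrink the critical section' approach - 
hypothetical code again, reusing the demo_map sketch above with an added 
'void *buf' field, not taken from any real patch:

#include <linux/slab.h>

/* Long critical section: the allocation is done with the lock held. */
static int demo_grow_slow(struct demo_map *m, size_t size)
{
        void *buf;

        write_lock(&m->lock);
        buf = kmalloc(size, GFP_ATOMIC);        /* everybody else spins meanwhile */
        if (!buf) {
                write_unlock(&m->lock);
                return -ENOMEM;
        }
        kfree(m->buf);
        m->buf = buf;
        write_unlock(&m->lock);

        return 0;
}

/* Shrunk critical section: allocate and free outside, only publish inside. */
static int demo_grow_fast(struct demo_map *m, size_t size)
{
        void *buf = kmalloc(size, GFP_KERNEL);
        void *old;

        if (!buf)
                return -ENOMEM;

        write_lock(&m->lock);
        old = m->buf;
        m->buf = buf;
        write_unlock(&m->lock);

        kfree(old);

        return 0;
}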

[ Btw., if PREEMPT_RT goes upstream we might not even need to break
  latencies all that much: people whose usecase values scheduling latency
  above throughput would run such a critical section preemptible anyway. ]
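
Re the lockdep / sleep-debugging question above: with the hypothetical 
demo_map from the earlier sketch, something like the snippet below is what 
those debug options are meant to catch. CONFIG_DEBUG_ATOMIC_SLEEP=y flags 
any sleeping call made inside the now-atomic critical sections at runtime, 
and CONFIG_PROVE_LOCKING=y adds lockdep's usage and ordering checks on top:

#include <linux/delay.h>

static void demo_bad_sleep(struct demo_map *m)
{
        read_lock(&m->lock);
        msleep(1);      /* BUG: "sleeping function called from invalid context" */
        read_unlock(&m->lock);
}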

Thanks,

	Ingo
