Date: Tue, 16 Apr 2013 07:49:50 -0400
From: Waiman Long <Waiman.Long@...com>
To: Ingo Molnar <mingo@...nel.org>
CC: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	David Howells <dhowells@...hat.com>, Dave Jones <davej@...hat.com>,
	Clark Williams <williams@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, x86@...nel.org, linux-arch@...r.kernel.org,
	"Chandramouleeswaran, Aswin" <aswin@...com>, Davidlohr Bueso <davidlohr.bueso@...com>,
	"Norton, Scott J" <scott.norton@...com>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 0/3 v2] mutex: Improve mutex performance by doing less atomic-ops & better spinning

On 04/16/2013 05:12 AM, Ingo Molnar wrote:
> * Waiman Long <Waiman.Long@...com> wrote:
>
>> [...]
>>
>> Patch 2 improves the mutex spinning process by reducing contention among the
>> spinners when competing for the mutex. This is done by using an MCS lock to put
>> the spinners in a queue so that only the first spinner will try to acquire the
>> mutex when it becomes available. This patch showed a significant performance
>> improvement of +30% on the AIM7 fserver and new_fserver workloads.
>
> Ok, that's really nice - and this approach has no arbitrary limits/tunings in it.
>
> Do you have a performance comparison to your first series (patches 1+2+3 IIRC) -
> how does this new series with MCS locking compare to the best previous result from
> that old series? Do we now achieve that level of performance?

Compared with the old patch set, the new patches 1+2 show an over 30%
performance gain at high user loads (1100-1500) in the fserver and
new_fserver workloads, whereas the old patches 1+2 or 1+3 only manage
around 10%. In the intermediate range of 200-1000 users, the two sets
deliver comparable performance gains.
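For readers unfamiliar with MCS locks, here is a minimal user-space sketch of
the queueing idea the patch relies on, written with C11 atomics. This is not
the actual kernel patch; the names (mcs_node, mcs_lock, mcs_unlock) and the
global tail pointer are illustrative assumptions. The point it demonstrates is
that each waiter spins on a flag in its own node rather than on the shared
lock word, so only the queue head ever contends for the lock itself:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* One queue node per waiter; lives on the waiter's stack. */
struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;	/* true while this waiter must keep spinning */
};

/* Tail of the waiter queue; NULL means the lock is free. */
static _Atomic(struct mcs_node *) mcs_tail = NULL;

static void mcs_lock(struct mcs_node *node)
{
	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, true);

	/* Atomically append ourselves as the new tail. */
	struct mcs_node *prev = atomic_exchange(&mcs_tail, node);
	if (prev) {
		/* Queue was non-empty: link in behind prev and spin on
		 * our OWN flag - no cache-line bouncing among waiters. */
		atomic_store(&prev->next, node);
		while (atomic_load(&node->locked))
			;
	}
	/* prev == NULL: queue was empty, we own the lock immediately. */
}

static void mcs_unlock(struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);
	if (!next) {
		/* No visible successor: try to swing the tail back to
		 * empty. If that fails, a new waiter is mid-enqueue;
		 * wait for its next-pointer link to appear. */
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong(&mcs_tail, &expected, NULL))
			return;
		while (!(next = atomic_load(&node->next)))
			;
	}
	/* Hand the lock directly to the queue head. */
	atomic_store(&next->locked, false);
}
```

In the mutex-spinning context of the patch, the same handoff discipline means
only the first queued spinner polls the mutex owner field, which is what cuts
the contention among spinners.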
Regards,
Longman