Message-ID: <BANLkTinptaydNvK4ZvGvy0KVLnRmmza7tA@mail.gmail.com>
Date: Thu, 16 Jun 2011 13:47:32 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andi Kleen <ak@...ux.intel.com>,
Shaohua Li <shaohua.li@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
David Miller <davem@...emloft.net>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Russell King <rmk@....linux.org.uk>,
Paul Mundt <lethal@...ux-sh.org>,
Jeff Dike <jdike@...toit.com>,
Richard Weinberger <richard@....at>,
"Luck, Tony" <tony.luck@...el.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Mel Gorman <mel@....ul.ie>, Nick Piggin <npiggin@...nel.dk>,
Namhyung Kim <namhyung@...il.com>,
"Shi, Alex" <alex.shi@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Rafael J. Wysocki" <rjw@...k.pl>
Subject: Re: REGRESSION: Performance regressions from switching anon_vma->lock
to mutex
On Thu, Jun 16, 2011 at 1:26 PM, Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>
> I ran exim with different kernel versions. Using the vanilla 2.6.39
> kernel as a baseline, the results are as follows:
>
> Kernel                      Throughput
> 2.6.39 (vanilla)            100.0%
> 2.6.39 + ra-patch           166.7%  (+66.7%)  (note: tmpfs readahead patchset is merged in 3.0-rc2)
> 3.0-rc2 (vanilla)            68.0%  (-32%)
> 3.0-rc2 + linus             115.7%  (+15.7%)
> 3.0-rc2 + linus + softirq    86.2%  (-17.3%)
Ok, so batching the semaphore operations makes more of a difference
than I would have expected.

I guess I'll cook up an improved patch that does the same batching for
the vma exit case too, and see if that makes the semaphores a
non-issue.
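
To be concrete about what "batching" means here, a minimal userspace
sketch (the struct layout and the process_item() helper are purely
illustrative, not the actual patch):

#include <pthread.h>
#include <stddef.h>

/*
 * Stand-in for a vma: each item has to be torn down under some shared
 * lock, and consecutive items often share the same lock (like vmas
 * sharing an anon_vma root).
 */
struct item {
        struct item *next;
        pthread_mutex_t *lock;          /* stands in for the shared root lock */
};

static void process_item(struct item *it)
{
        /* per-item work that must run under it->lock */
        (void)it;
}

static void process_all_batched(struct item *head)
{
        pthread_mutex_t *held = NULL;
        struct item *it;

        for (it = head; it; it = it->next) {
                /*
                 * Only take/drop the lock when it actually changes, so a
                 * run of items sharing one lock costs a single lock/unlock
                 * round-trip instead of one per item.
                 */
                if (it->lock != held) {
                        if (held)
                                pthread_mutex_unlock(held);
                        held = it->lock;
                        pthread_mutex_lock(held);
                }
                process_item(it);
        }
        if (held)
                pthread_mutex_unlock(held);
}

The point is just that a run of items sharing one lock pays for one
lock/unlock pair instead of one per item, which is presumably what
helps so much in Tim's numbers.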
> I also noticed that the run-to-run variation has increased quite a bit
> for 3.0-rc2 (I'm using 6 runs per kernel). Perhaps a side effect of
> converting anon_vma->lock to a mutex?
So the thing about using mutexes is that heavy contention on a
spinlock is very stable: it may be *slow*, but it's reliable, nicely
queued, and has very few surprises.

On a mutex, heavy contention results in much more subtle behavior. The
adaptive spinning often - but certainly not always - makes the mutex
act like a spinlock; once you have lots of contention, though, the
adaptive spinning breaks down, and then you get lots of random
interactions with the scheduler, 'need_resched', etc.
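
To make the adaptive-spinning point concrete, here is a toy
illustration of the behavior (C11 atomics plus sched_yield(); this is
not the kernel's actual mutex code, just the shape of it):

#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

struct toy_mutex {
        atomic_int locked;              /* 0 = free, 1 = held */
        atomic_bool owner_running;      /* set by the owner while it holds the lock */
};

static void toy_lock(struct toy_mutex *m)
{
        int expected = 0;

        while (!atomic_compare_exchange_weak(&m->locked, &expected, 1)) {
                expected = 0;
                if (atomic_load(&m->owner_running)) {
                        /*
                         * Optimistic spin: the owner is on a CPU and will
                         * probably release soon, so behave like a spinlock.
                         */
                        continue;
                }
                /*
                 * The owner is blocked or preempted: spinning is pointless,
                 * so go back through the scheduler (a real mutex sleeps on
                 * a wait queue instead of yielding).
                 */
                sched_yield();
        }
        atomic_store(&m->owner_running, true);
}

static void toy_unlock(struct toy_mutex *m)
{
        atomic_store(&m->owner_running, false);
        atomic_store(&m->locked, 0);
}

Once the owner gets preempted or sleeps, every waiter stops spinning
and starts bouncing through the scheduler, which is exactly where the
timing becomes unpredictable and the run-to-run variance comes from.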
The only valid answer to lock contention is invariably just "don't do
that then". We've been pretty good at getting rid of problematic
locks, but this one clearly isn't one of the ones we've fixed ;)
Linus