Message-ID: <BANLkTin_46==epHKUbWJ55bt3mPaJieV2Q@mail.gmail.com>
Date: Thu, 16 Jun 2011 13:37:47 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Andi Kleen <ak@...ux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Shaohua Li <shaohua.li@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
David Miller <davem@...emloft.net>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Russell King <rmk@....linux.org.uk>,
Paul Mundt <lethal@...ux-sh.org>,
Jeff Dike <jdike@...toit.com>,
Richard Weinberger <richard@....at>,
"Luck, Tony" <tony.luck@...el.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Mel Gorman <mel@....ul.ie>, Nick Piggin <npiggin@...nel.dk>,
Namhyung Kim <namhyung@...il.com>,
"Shi, Alex" <alex.shi@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Rafael J. Wysocki" <rjw@...k.pl>
Subject: Re: REGRESSION: Performance regressions from switching anon_vma->lock
to mutex

On Thu, Jun 16, 2011 at 1:14 PM, Andi Kleen <ak@...ux.intel.com> wrote:
>
> I haven't analyzed it in detail, but I suspect it's some cache line
> bounce, which can slow things down quite a lot. Also the total number
> of invocations is quite high (hundreds of messages per core * 32 cores).

The fact is, glibc is just total crap.

I tried to send uli a patch to just add caching. No go. I sent
*another* patch to at least make glibc use a sane interface (and the
cache if it needs to fall back on /proc/stat for some legacy reason).
We'll see what happens.

Paul Eggert suggested "caching for one second" - by just calling
"gettimeofday()" to see how old the cache is. That would work too.
The point I'm making is that it really is a glibc problem. Glibc is
doing stupid expensive things, and not trying to correct for the fact
that it's expensive.

> I did, but I gave up fully following that code path because it's so
> convoluted :-/

I do agree that glibc sources are incomprehensible, with multiple
layers of abstraction (sysdeps, "posix", helper functions etc etc).

In this case it was really trivial to find the culprit with a simple

    git grep /proc/stat

though. The code is crap. It's insane. It's using
/sys/devices/system/cpu for _SC_NPROCESSORS_CONF, which is at least a
reasonable interface to use. But it does it in odd ways, and actually
counts the CPUs by doing a readdir call. And it doesn't cache the
result, even though that particular result had better be 100% stable -
it has nothing to do with "online" vs "offline" etc.
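
Counting them - and caching the count - really is this simple. A
sketch (not glibc's actual code) of what the _SC_NPROCESSORS_CONF
path could look like:

    #include <ctype.h>
    #include <dirent.h>
    #include <string.h>

    /* Count the "cpuN" entries in /sys/devices/system/cpu.  The
     * configured count cannot change at runtime, so caching the
     * result once is safe. */
    int nprocessors_conf(void)
    {
            static int cached = -1;
            struct dirent *de;
            DIR *dir;
            int count = 0;

            if (cached > 0)
                    return cached;
            dir = opendir("/sys/devices/system/cpu");
            if (!dir)
                    return -1;
            while ((de = readdir(dir)) != NULL) {
                    /* match "cpu" followed by a digit, skipping
                     * "cpufreq", "cpuidle" and friends */
                    if (!strncmp(de->d_name, "cpu", 3) &&
                        isdigit((unsigned char)de->d_name[3]))
                            count++;
            }
            closedir(dir);
            cached = count;
            return count;
    }
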
But then for _SC_NPROCESSORS_ONLN, it doesn't actually use
/sys/devices/system/cpu at all, but the /proc/stat interface. Which is
slow, mostly because it has all the crazy interrupt stuff in it, but
also because it has lots of legacy stuff.

I wrote a _much_ cleaner routine (loosely based on what we do in
tools/perf) to just parse /sys/devices/system/cpu/online. I didn't
even time it, but I can almost guarantee that it's an order of
magnitude faster than /proc/stat. And if that doesn't work, you can
fall back on a cached version of the /proc/stat parsing, since if
those files don't exist, you can forget about CPU hotplug.
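
The shape of it is roughly this (a sketch, not the actual patch; the
file is a range list like "0-3,8-11", and error handling is kept
minimal):

    #include <stdio.h>

    /* Returns the online CPU count, or -1 so the caller can fall
     * back to the (cached) /proc/stat parser. */
    int nprocessors_onln(void)
    {
            FILE *f = fopen("/sys/devices/system/cpu/online", "r");
            int count = 0, lo, hi;

            if (!f)
                    return -1;
            while (fscanf(f, "%d", &lo) == 1) {
                    hi = lo;
                    if (fgetc(f) == '-') {  /* a range: read the end */
                            if (fscanf(f, "%d", &hi) != 1)
                                    break;
                            fgetc(f);       /* eat ',' or '\n' */
                    }
                    count += hi - lo + 1;
            }
            fclose(f);
            return count;
    }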

> So you mean caching it at startup time? Otherwise the parent would
> need to do sysconf() at least, which it doesn't do (the exim source
> doesn't really know anything about libdb internals)

Even if you do it in the children, it will help. At least it would be
run just _once_ per fork.

But actually looking at glibc just shows that they are simply doing
stupid things. And I absolutely _refuse_ to add new interfaces to the
kernel only because glibc is being a moron.

Linus