Message-ID: <1347727001.7029.37.camel@marge.simpson.net>
Date: Sat, 15 Sep 2012 18:36:41 +0200
From: Mike Galbraith <efault@....de>
To: Andi Kleen <andi@...stfloor.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>,
Nikolay Ulyanitsky <lystor@...il.com>,
linux-kernel@...r.kernel.org,
Andreas Herrmann <andreas.herrmann3@....com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: 20% performance drop on PostgreSQL 9.2 from kernel 3.5.3 to
3.6-rc5 on AMD chipsets - bisected
On Sat, 2012-09-15 at 09:16 -0700, Andi Kleen wrote:
> Mike Galbraith <efault@....de> writes:
> >
> > The only reason I can think of why pgbench might suffer is postgres's
> > userspace spinlocks. If you always look for an idle core, you improve
> > the odds that the wakeup won't preempt a lock holder, sending others
> > into a long spin.
>
> User space spinlocks like this unfortunately have a tendency to break
> with all kinds of scheduler changes. We've seen this frequently too
> with other users. The best bet currently is to use the real time
> scheduler, but with various tweaks to get its overhead down.
Yeah, that's one way, but decidedly sub-optimal.
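For illustration, the RT workaround Andi refers to boils down to moving the
lock-heavy tasks to SCHED_FIFO, so an ordinary SCHED_OTHER wakeup can't
preempt a lock holder mid critical section.  A minimal sketch (priority 1 is
an arbitrary example value, and none of the overhead tweaks are shown):

#include <sched.h>
#include <stdio.h>

/* Example only: switch the calling task to the RT class so ordinary
 * CFS wakeups cannot preempt it while it holds a userspace spinlock.
 * The priority value is arbitrary for this sketch. */
static int make_rt(void)
{
        struct sched_param sp = { .sched_priority = 1 };

        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
                perror("sched_setscheduler");
                return -1;
        }
        return 0;
}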
> Ultimately the problem is that user space spinlocks with CPU
> oversubscription is a very unstable setup and small changes can
> easily disturb it.
>
> Just using futex is unfortunately not the answer either.
Yes, postgres performs loads better with its spinlocks, but due to
that, it necessarily _hates_ preemption.  How is the scheduler
supposed to know that any specific userland task _really_ shouldn't be
preempted at any specific time, else bad things follow?
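To make that concrete: a postgres-style userspace spinlock is essentially a
raw test-and-set loop like the sketch below (simplified, not the actual
s_lock.c code).  While the holder stays on-CPU the critical section is a
handful of cycles; if the scheduler preempts the holder, every waiter spins
on a word that cannot change until the holder runs again, which is why a
wakeup that lands on the holder's CPU instead of an idle core hurts so
badly.  A futex-based lock would sleep instead of spin, but then every
contended acquire eats a syscall, which is the trade-off being pointed at
above.

#include <sched.h>
#include <stdatomic.h>

typedef atomic_int slock_t;

/* Simplified stand-in for a userspace TAS spinlock.  If the owner is
 * preempted while holding the lock, contenders burn CPU here until the
 * scheduler gets around to running the owner again. */
static void s_lock(slock_t *lock)
{
        int spins = 0;

        while (atomic_exchange(lock, 1)) {
                if (++spins > 1000)     /* arbitrary cap for this sketch */
                        sched_yield();  /* beg the scheduler to run the holder */
        }
}

static void s_unlock(slock_t *lock)
{
        atomic_store(lock, 0);
}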
-Mike