Message-ID: <1347770100.6952.31.camel@marge.simpson.net>
Date: Sun, 16 Sep 2012 06:35:00 +0200
From: Mike Galbraith <efault@....de>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
Cc: Andi Kleen <andi@...stfloor.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>,
Nikolay Ulyanitsky <lystor@...il.com>,
linux-kernel@...r.kernel.org,
Andreas Herrmann <andreas.herrmann3@....com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: 20% performance drop on PostgreSQL 9.2 from kernel 3.5.3 to
3.6-rc5 on AMD chipsets - bisected
On Sat, 2012-09-15 at 22:32 +0100, Alan Cox wrote:
> > Yes, postgres performs loads better with its spinlocks, but due to
> > that, it necessarily _hates_ preemption. How is the scheduler
> > supposed to know that any specific userland task _really_ shouldn't be
> > preempted at any specific time, else bad things follow?
>
> You provide a shared page for a process group so it can write hints to
> which is kernel mapped so the scheduler can peek..
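As a rough sketch of the userspace half of that idea (entirely
hypothetical: the struct, the dont_preempt field and the anonymous
MAP_SHARED mapping are stand-ins, since no such kernel interface exists
today; the kernel-side peeking is the part that would have to be
invented):

/* Hypothetical userspace side of a shared scheduler hint page: the
 * process group maps a page and toggles a "holding a userspace lock,
 * please avoid preempting me" word around its spinlock critical
 * sections.  MAP_SHARED|MAP_ANONYMOUS stands in for whatever mapping
 * the kernel would actually hand out. */
#include <stdint.h>
#include <sys/mman.h>

struct sched_hint_page {
	volatile uint32_t dont_preempt;	/* nonzero: in a critical section */
};

static struct sched_hint_page *hint;

static int hint_page_init(void)
{
	hint = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	return hint == MAP_FAILED ? -1 : 0;
}

static void lock_section_enter(void) { hint->dont_preempt = 1; }
static void lock_section_exit(void)  { hint->dont_preempt = 0; }

int main(void)
{
	if (hint_page_init())
		return 1;
	lock_section_enter();
	/* ... spinlock-protected work the scheduler shouldn't interrupt ... */
	lock_section_exit();
	return 0;
}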
Or perhaps a flag a la SCHED_RESET_ON_FORK to provide a hint that isn't
necessarily followed. That hint could be to simply always try the
LAST_BUDDY thing with flagged tasks, since we know that works (postgres
inspired LAST_BUDDY). Even with postgres-like things, fast-mover
kthreads etc. punching through isn't necessarily a bad thing; you just
need to avoid the punch leaving a gigantic hole.
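For comparison, today's SCHED_RESET_ON_FORK is just a flag OR'd into the
policy handed to sched_setscheduler(), and a "try LAST_BUDDY on me" hint
could ride in the same way. Only the RESET_ON_FORK part in the sketch
below is a real interface; the buddy-hint bit would be new and is not
shown:

/* How the existing SCHED_RESET_ON_FORK hint is set today.  A LAST_BUDDY
 * style "buddy me" hint could be another bit OR'd into the policy the
 * same way (that bit does not exist; this only shows the mechanism). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

#ifndef SCHED_RESET_ON_FORK
#define SCHED_RESET_ON_FORK	0x40000000
#endif

int main(void)
{
	struct sched_param sp = { .sched_priority = 0 };

	if (sched_setscheduler(0, SCHED_OTHER | SCHED_RESET_ON_FORK, &sp))
		perror("sched_setscheduler");
	return 0;
}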
Oh, while I'm thinking about it, there's another scenario that could
cause the select_idle_sibling() change to affect pgbench on largeish
packages, but it boils down to preemption odds as well. IIRC pgbench
_was_ at least 1:N, i.e. one process driving the whole load. A waker of
many (a singularly bad idea as a way to generate load) being preempted
by its wakees stalls the whole load, so expensive spreading of wakees to
the four winds a la WAKE_BALANCE becomes attractive, that pain being
markedly less intense than having multiple cores go idle while the
creator of work waits for one.
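To make the 1:N shape concrete, a toy dispatcher feeding N workers over
pipes looks like the sketch below (not pgbench's actual code, just the
pattern): every worker wakeup depends on the dispatcher getting CPU, so
preempting it with its own wakees stalls everything downstream.

/* Toy 1:N waker: a single dispatcher drives NR_WORKERS workers over
 * pipes.  If the dispatcher is preempted by the workers it just woke,
 * the whole pipeline stalls waiting for it.  Illustration only. */
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NR_WORKERS 8

int main(void)
{
	int fds[NR_WORKERS][2];
	char token = 'x';
	int i, j, iter;

	for (i = 0; i < NR_WORKERS; i++)
		if (pipe(fds[i]))
			exit(1);

	for (i = 0; i < NR_WORKERS; i++) {
		if (fork() == 0) {			/* worker */
			for (j = 0; j < NR_WORKERS; j++)
				close(fds[j][1]);	/* only the dispatcher writes */
			while (read(fds[i][0], &token, 1) == 1)
				;			/* a "transaction" would run here */
			_exit(0);
		}
	}

	for (iter = 0; iter < 100000; iter++)		/* waker of many */
		for (i = 0; i < NR_WORKERS; i++)
			write(fds[i][1], &token, 1);

	for (i = 0; i < NR_WORKERS; i++)
		close(fds[i][1]);			/* EOF -> workers exit */
	while (wait(NULL) > 0)
		;
	return 0;
}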
-Mike