Message-Id: <200803111928.20043.nickpiggin@yahoo.com.au>
Date: Tue, 11 Mar 2008 19:28:19 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Ingo Molnar <mingo@...e.hu>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: Poor PostgreSQL scaling on Linux 2.6.25-rc5 (vs 2.6.22)
On Tuesday 11 March 2008 18:58, Ingo Molnar wrote:
> * Nick Piggin <nickpiggin@...oo.com.au> wrote:
> > PostgreSQL is different. It has zero idle time when running this
> > workload. It actually scaled "super linearly" on my system here, from
> > single threaded performance to 8 cores (giving an 8.2x performance
> > increase)!
> >
> > So PostgreSQL's performance profile is actually much more interesting.
> > To my dismay, I found that Linux 2.6.25-rc5 performs really badly
> > once the runqueues are saturated and the thread count keeps growing.
> > 2.6.22 drops a little bit, but basically settles near the peak
> > performance. With 2.6.25-rc5, throughput seems to be falling off
> > linearly with the number of threads.
>
> thanks Nick, i'll check this
Thanks.
> - and i agree that this very much looks
> like a scheduler regression.
I'd say it is. Quite a nasty one too: if your server gets nudged over
the edge of the cliff, it goes into a feedback loop and goes splat at
the bottom somewhere ;)
> Just a quick suggestion, does a simple
> runtime tune like this fix the workload:
>
> for N in /proc/sys/kernel/sched_domain/*/*/flags; do
>         echo $[`cat $N`|16] > $N
> done
>
> this sets SD_WAKE_IDLE for all the nodes in the scheduler domains tree.
> (doing this results in over-aggressive idle balancing - but if this fixes
> your testcase it shows that we were balancing under-aggressively for this
> workload.) Thanks,
It doesn't change anything.
There is no idle time for this workload, btw.
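For reference, a minimal sketch of the same runtime tune that also saves the
old flag values so it can be undone afterwards. It assumes the 2.6.25-era
flags layout where bit 16 is SD_WAKE_IDLE, as in the snippet above; the
backup file path is only an example.

    #!/bin/sh
    # Sketch: set SD_WAKE_IDLE (bit 16) in every sched_domain's flags,
    # keeping the original values so the change can be reverted.
    BACKUP=/tmp/sched_domain_flags.orig       # example path only
    : > "$BACKUP"
    for N in /proc/sys/kernel/sched_domain/*/*/flags; do
            OLD=`cat $N`
            echo "$N $OLD" >> "$BACKUP"       # remember original value
            echo $((OLD | 16)) > "$N"         # OR in SD_WAKE_IDLE
    done
    # To restore later:
    #   while read N OLD; do echo $OLD > $N; done < /tmp/sched_domain_flags.orig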