Message-ID: <1337677268.9698.6.camel@twins>
Date: Tue, 22 May 2012 11:01:08 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Martin Schwidefsky <schwidefsky@...ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
linux-kernel@...r.kernel.org,
Heiko Carstens <heiko.carstens@...ibm.com>,
Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
Subject: Re: [PATCH 0/2] RFC: readd fair sleepers for server systems
On Mon, 2012-05-21 at 17:45 +0200, Martin Schwidefsky wrote:
> our performance team found a performance degradation with a recent
> distribution update in regard to fair sleepers (or the lack of fair
> sleepers). On s390 we used to run with fair sleepers disabled.
This change was made a very long time ago... tell your people to mind
what upstream does if they want us to mind them.
Also, reports like this make me want to make /debug/sched_features a
patch in tip/out-of-tree so that it's never available outside
development.
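
For readers less familiar with the mechanism: what follows is a
simplified, userspace-only sketch of the pattern behind
/debug/sched_features. Each feature is a bit in a global mask that can
be flipped at runtime by writing "FEATURE" or "NO_FEATURE" to the file,
and hot paths test the bit with a macro. The names and layout below are
illustrative only, not the kernel's actual implementation.

/*
 * Simplified model of a runtime feature-bit tunable, in the spirit of
 * /sys/kernel/debug/sched_features.  Illustrative sketch, not kernel code.
 */
#include <stdio.h>
#include <string.h>

enum {
	__FEAT_FAIR_SLEEPERS,
	__FEAT_GENTLE_FAIR_SLEEPERS,
	__FEAT_NR,
};

static unsigned long sched_features =
	(1UL << __FEAT_FAIR_SLEEPERS) |
	(1UL << __FEAT_GENTLE_FAIR_SLEEPERS);

#define sched_feat(x)	(sched_features & (1UL << __FEAT_##x))

static const char *feat_names[__FEAT_NR] = {
	"FAIR_SLEEPERS",
	"GENTLE_FAIR_SLEEPERS",
};

/* Emulate writing a token to the debugfs file: a "NO_" prefix clears the bit. */
static void sched_feat_write(const char *buf)
{
	int neg = 0, i;

	if (!strncmp(buf, "NO_", 3)) {
		neg = 1;
		buf += 3;
	}
	for (i = 0; i < __FEAT_NR; i++) {
		if (!strcmp(buf, feat_names[i])) {
			if (neg)
				sched_features &= ~(1UL << i);
			else
				sched_features |= 1UL << i;
		}
	}
}

int main(void)
{
	printf("FAIR_SLEEPERS: %s\n", sched_feat(FAIR_SLEEPERS) ? "on" : "off");
	sched_feat_write("NO_FAIR_SLEEPERS");	/* what the s390 setups used to do */
	printf("FAIR_SLEEPERS: %s\n", sched_feat(FAIR_SLEEPERS) ? "on" : "off");
	return 0;
}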
> We see the performance degradation with our network benchmark and fair
> sleepers enabled; the largest hit is on virtual connections:
>
> VM guest Hipersockets
>   Throughput degrades up to 18%
>   CPU load/cost increase up to 17%
> VM stream
>   Throughput degrades up to 15%
>   CPU load/cost increase up to 22%
> LPAR Hipersockets
>   Throughput degrades up to 27%
>   CPU load/cost increase up to 20%
Why is this? Is it some weird interaction with your hypervisor?
> In short, we want the fair sleepers tunable back. I understand that on
> x86 we want to avoid the cost of a branch on the hot path in place_entity,
> therefore add a compile time config option for the fair sleeper control.
I'm very much not liking this... it makes s390 schedule completely
differently from all the other architectures.
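
For context, the sleeper credit Martin wants to make configurable is
applied when a waking task is placed back on the runqueue. Below is a
rough, standalone model of how a compile-time gate around that credit
might look. CONFIG_SCHED_FAIR_SLEEPERS is a hypothetical Kconfig symbol,
and the function signature is heavily simplified compared to the real
place_entity() in the CFS code; this is a sketch of the idea, not a copy
of any particular tree.

/*
 * Standalone, simplified model of the sleeper-credit placement that the
 * patch wants to gate at compile time.  Illustrative sketch only.
 */
#include <stdio.h>

typedef unsigned long long u64;

static unsigned long sysctl_sched_latency = 6000000UL;	/* example: 6ms in ns */
static int gentle_fair_sleepers = 1;			/* GENTLE_FAIR_SLEEPERS */

/* Pick the vruntime a waking task is placed at, relative to min_vruntime. */
static u64 place_entity(u64 min_vruntime, u64 se_vruntime, int initial)
{
	u64 vruntime = min_vruntime;

#ifdef CONFIG_SCHED_FAIR_SLEEPERS	/* hypothetical compile-time switch */
	/* Credit sleepers with up to one (half) latency period. */
	if (!initial) {
		unsigned long thresh = sysctl_sched_latency;

		if (gentle_fair_sleepers)
			thresh >>= 1;

		vruntime -= thresh;
	}
#endif
	/* Never gain time by being placed backwards. */
	return se_vruntime > vruntime ? se_vruntime : vruntime;
}

int main(void)
{
	u64 min_vr = 100000000ULL;	/* arbitrary example values */
	u64 se_vr  =  90000000ULL;

	printf("placed at %llu\n", place_entity(min_vr, se_vr, 0));
	return 0;
}

With the symbol defined, the waker is placed ahead of min_vruntime by up
to half a latency period; without it, placement collapses to the plain
max() and sleepers get no credit, which is the behaviour the s390 setups
relied on.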