Message-Id: <1337615137-55111-1-git-send-email-schwidefsky@de.ibm.com>
Date: Mon, 21 May 2012 17:45:35 +0200
From: Martin Schwidefsky <schwidefsky@...ibm.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
linux-kernel@...r.kernel.org
Cc: Heiko Carstens <heiko.carstens@...ibm.com>,
Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: [PATCH 0/2] RFC: readd fair sleepers for server systems
Our performance team found a performance degradation after a recent
distribution update, related to fair sleepers (or rather the lack of
fair sleepers). On s390 we used to run with fair sleepers disabled.
We see the degradation with our network benchmark when fair sleepers
are enabled; the largest hit is on virtual connections:
  VM guest Hipersockets:
    Throughput degrades by up to 18%
    CPU load/cost increases by up to 17%
  VM stream:
    Throughput degrades by up to 15%
    CPU load/cost increases by up to 22%
  LPAR Hipersockets:
    Throughput degrades by up to 27%
    CPU load/cost increases by up to 20%
Real-world workloads are affected as well, e.g. we see degradations
with OLTP database workloads; Christian has the numbers if needed.
The only workloads on s390 that benefit from fair sleepers are some
J2EE workloads, and only in the <2% range.
In short, we want the fair sleepers tunable back. I understand that on
x86 we want to avoid the cost of a branch on the hot path in
place_entity, so this series adds a compile time config option for the
fair sleeper control.
blue skies,
Martin
Martin Schwidefsky (2):
sched: readd FAIR_SLEEPERS feature
sched: enable FAIR_SLEEPERS for s390
 arch/s390/Kconfig       |  1 +
 init/Kconfig            |  3 +++
 kernel/sched/fair.c     | 14 +++++++++++++-
 kernel/sched/features.h |  9 +++++++++
 4 files changed, 26 insertions(+), 1 deletion(-)
--
1.7.10.2