Message-Id: <cover.1466184592.git.jpoimboe@redhat.com>
Date: Fri, 17 Jun 2016 12:43:22 -0500
From: Josh Poimboeuf <jpoimboe@...hat.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org,
Mel Gorman <mgorman@...hsingularity.net>,
Matt Fleming <matt@...eblueprint.co.uk>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Subject: [PATCH 0/5] sched/debug: decouple sched_stat tracepoints from CONFIG_SCHEDSTATS
NOTE: I didn't include any performance numbers because I wasn't able to
get consistent results. I tried the following on a Xeon E5-2420 v2 CPU:
$ for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo -n performance > $i; done
$ echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
$ echo 100 > /sys/devices/system/cpu/intel_pstate/min_perf_pct
$ echo 0 > /proc/sys/kernel/nmi_watchdog
$ taskset 0x10 perf stat -n -r10 perf bench sched pipe -l 1000000
I intended to post the numbers from that, both with and without
SCHEDSTATS, but when I repeated the test on a different day the results
were surprisingly different, enough to change the conclusions.
So any advice on measuring scheduler performance would be appreciated...
Josh Poimboeuf (5):
sched/debug: rename and move enqueue_sleeper()
sched/debug: schedstat macro cleanup
sched/debug: 'schedstat_val()' -> 'schedstat_val_or_zero()'
sched/debug: remove several CONFIG_SCHEDSTATS guards
sched/debug: decouple 'sched_stat_*' tracepoints from
CONFIG_SCHEDSTATS
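As background for patch 3, here is a minimal userspace sketch of the
idea behind a "value or zero" schedstat accessor: the stat fields stay
compiled in, but reads fall back to zero when stats collection is
disabled at runtime. The names, the boolean flag and the demo struct
below are illustrative only (the real code uses a static key and lives
in kernel/sched/stats.h); this is not lifted from the patches.

	#include <stdio.h>
	#include <stdbool.h>

	/* Illustrative stand-in for the runtime schedstats switch
	 * (the kernel uses a static key toggled via sysctl). */
	static bool schedstats_enabled = false;

	/* Sketch of the accessor split: a plain read vs. a read that
	 * yields 0 when stats collection is off. */
	#define schedstat_val(var)          (var)
	#define schedstat_val_or_zero(var)  (schedstats_enabled ? (var) : 0)

	struct sched_statistics_demo {
		unsigned long long wait_sum;	/* hypothetical stat field */
	};

	int main(void)
	{
		struct sched_statistics_demo st = { .wait_sum = 42 };

		/* Stats disabled: _or_zero callers see 0, raw callers
		 * still see whatever is in the field. */
		printf("raw=%llu or_zero=%llu\n",
		       schedstat_val(st.wait_sum),
		       schedstat_val_or_zero(st.wait_sum));

		/* Stats enabled: both accessors agree. */
		schedstats_enabled = true;
		printf("raw=%llu or_zero=%llu\n",
		       schedstat_val(st.wait_sum),
		       schedstat_val_or_zero(st.wait_sum));

		return 0;
	}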
include/linux/sched.h | 11 +-
kernel/latencytop.c | 2 -
kernel/profile.c | 5 -
kernel/sched/core.c | 59 ++++------
kernel/sched/debug.c | 104 +++++++++--------
kernel/sched/fair.c | 290 ++++++++++++++++++++---------------------------
kernel/sched/idle_task.c | 2 +-
kernel/sched/stats.h | 24 ++--
lib/Kconfig.debug | 1 -
9 files changed, 220 insertions(+), 278 deletions(-)
--
2.4.11