Message-Id: <20200629162633.8800-1-qais.yousef@arm.com>
Date: Mon, 29 Jun 2020 17:26:31 +0100
From: Qais Yousef <qais.yousef@....com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Valentin Schneider <valentin.schneider@....com>,
Qais Yousef <qais.yousef@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Chris Redpath <chris.redpath@....com>,
Lukasz Luba <lukasz.luba@....com>, linux-kernel@...r.kernel.org
Subject: [PATCH v5 0/2] sched: Optionally skip uclamp logic in fast path
This series attempts to address a report that the uclamp logic can be
expensive at times, showing up as a regression in netperf UDP_STREAM under
certain conditions.
The first patch fixes how struct uclamp_rq is initialized; it is required by
the second patch, which contains the real 'fix'.
Worth noting that the root cause of the overhead is believed to be system
specific or related to potential code/data layout issues, leading to worse
I/D cache performance. Different systems exhibited different behaviors, and
the regression disappeared in certain kernel versions while attempting to
reproduce it.
More info can be found here:
https://lore.kernel.org/lkml/20200616110824.dgkkbyapn3io6wik@e107158-lin/
Adding the static key seemed the best way to ensure the effect of uclamp is
minimized for kernels that compile it in but don't have a userspace that uses
it. This allows distros to ship uclamp-capable kernels by default without
having to compromise on performance for the systems that could be affected.
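For reference, below is a minimal sketch of the gating pattern rather than
the actual diff: the key name sched_uclamp_used matches the one referenced in
the changelog, but the hook name uclamp_rq_inc() and the enable helper are
illustrative assumptions here.

/* Sketch only; relies on include/linux/jump_label.h and kernel/sched/sched.h. */
DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);

static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
{
	/*
	 * The key starts disabled, so this check is patched to a NOP in the
	 * enqueue fast path until userspace actually touches uclamp.
	 */
	if (!static_branch_unlikely(&sched_uclamp_used))
		return;

	/* ... per clamp_id rq aggregation would go here ... */
}

/* Hypothetical helper: called from any path that modifies a uclamp value. */
static void uclamp_mark_used(void)
{
	static_branch_enable(&sched_uclamp_used);
}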
Changes in v5:
* Fix a race that could happen when enqueue/dequeue of tasks A and B don't
happen in order and sched_uclamp_used is enabled in between.
* Add more comments explaining the race and the behavior of
uclamp_rq_util_with(), which is now protected with a static key so it
becomes a NOP. When no uclamp aggregation is done at the rq level, this
function can't do much.
Changes in v4:
* Fix broken boosting of RT tasks when the static key is disabled.
Changes in v3:
* Avoid double negatives and rename the static key to uclamp_used
* Unconditionally enable the static key through any of the paths where
the user can modify the default uclamp value.
* Use a C99 named struct initializer for struct uclamp_rq, which is easier
to read than the memset() (see the sketch just below).
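To illustrate the initializer point above, a generic sketch with placeholder
fields rather than the real struct uclamp_rq layout:

/* Placeholder layout for illustration, not the real struct uclamp_rq. */
struct uclamp_rq {
	unsigned int value;
	/* ... buckets etc. ... */
};

/* Before: zero everything, then fix up the field that matters. */
static void init_rq_memset(struct uclamp_rq *uc_rq, unsigned int def)
{
	memset(uc_rq, 0, sizeof(*uc_rq));	/* memset() from <linux/string.h> */
	uc_rq->value = def;
}

/*
 * After: a named (designated) initializer states the intent and still
 * zero-fills the unnamed members.
 */
static void init_rq_c99(struct uclamp_rq *uc_rq, unsigned int def)
{
	*uc_rq = (struct uclamp_rq) {
		.value = def,
	};
}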
Changes in v2:
* Add more info in the commit message about the result of perf diff to
demonstrate that the activate/deactivate_task pressure is reduced in
the fast path.
* Fix sparse warning reported by the test robot.
* Add an extra commit about using static_branch_likely() instead of
static_branch_unlikely().
Thanks
--
Qais Yousef
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Ben Segall <bsegall@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Patrick Bellasi <patrick.bellasi@...bug.net>
Cc: Chris Redpath <chris.redpath@....com>
Cc: Lukasz Luba <lukasz.luba@....com>
Cc: linux-kernel@...r.kernel.org
Qais Yousef (2):
sched/uclamp: Fix initialization of struct uclamp_rq
sched/uclamp: Protect uclamp fast path code with static key
kernel/sched/core.c | 86 +++++++++++++++++++++++++++++---
kernel/sched/cpufreq_schedutil.c | 2 +-
kernel/sched/sched.h | 39 ++++++++++++++-
3 files changed, 117 insertions(+), 10 deletions(-)
--
2.17.1