Message-ID: <CAKfTPtDh+D9AdzcsjYuv8LmtWag2MaHx7Ysrxb7JQittKa_K0A@mail.gmail.com>
Date: Wed, 24 Jun 2020 11:00:06 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
Qais Yousef <qais.yousef@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Chris Redpath <chris.redpath@....com>,
Lukasz Luba <lukasz.luba@....com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/2] sched: Optionally skip uclamp logic in fast path
On Tue, 23 Jun 2020 at 19:40, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
>
> On 19/06/2020 19:20, Qais Yousef wrote:
> > This series attempts to address the report that uclamp logic could be expensive
> > sometimes and shows a regression in netperf UDP_STREAM under certain
> > conditions.
> >
> > The first patch is a fix for how struct uclamp_rq is initialized which is
> > required by the 2nd patch which contains the real 'fix'.
> >
> > Worth noting that the root cause of the overhead is believed to be system
> > specific or related to potential code/data layout issues, leading to
> > worse I/D $ performance.
> >
> > Different systems exhibited different behaviors, and the regression did
> > disappear in certain kernel versions while attempting to reproduce it.
> >
> > More info can be found here:
> >
> > https://lore.kernel.org/lkml/20200616110824.dgkkbyapn3io6wik@e107158-lin/
> >
> > Having the static key seemed the best way to ensure the effect of uclamp
> > is minimized for kernels that compile it in but don't have a userspace
> > that uses it. This allows distros to distribute uclamp-capable kernels by
> > default without having to compromise on performance for some systems that
> > could be affected.
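
[A minimal sketch of the static-key fast-path pattern described above; the
identifier names here are illustrative and may not match the actual patch:]

#include <linux/jump_label.h>

/* Default-off key: uclamp accounting is skipped until userspace uses it. */
DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);

static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
{
	/* Compiled to a NOP branch while no uclamp user exists. */
	if (!static_branch_unlikely(&sched_uclamp_used))
		return;

	/* ... per-bucket uclamp accounting on enqueue ... */
}

/* Flipped once the first task, cgroup or sysctl sets a clamp value. */
static void uclamp_enable(void)
{
	static_branch_enable(&sched_uclamp_used);
}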
>
> My test data indicates that the static key w/o any uclamp users (3)
> brings the performance number for the 'perf bench sched pipe'
> workload back to the !CONFIG_UCLAMP_TASK level (1).
>
> platform:
>
> Arm64 Hikey960 (only little CPUs [0-3]), no CPUidle,
> performance CPUfreq governor
>
> workload:
>
> perf stat -n -r 20 -- perf bench sched pipe -T -l 100000
>
>
> (A) *** Performance results ***
>
> (1) tip/sched/core
> # CONFIG_UCLAMP_TASK is not set
>
> *1.39285* +- 0.00191 seconds time elapsed ( +- 0.14% )
>
> (2) tip/sched/core
> CONFIG_UCLAMP_TASK=y
>
> *1.42877* +- 0.00181 seconds time elapsed ( +- 0.13% )
>
> (3) tip/sched/core + opt_skip_uclamp_v2
> CONFIG_UCLAMP_TASK=y
>
> *1.38833* +- 0.00291 seconds time elapsed ( +- 0.21% )
>
> (4) tip/sched/core + opt_skip_uclamp_v2
> CONFIG_UCLAMP_TASK=y
> echo 512 > /proc/sys/kernel/sched_util_clamp_min (enable uclamp)
>
> *1.42062* +- 0.00238 seconds time elapsed ( +- 0.17% )
>
>
> (B) *** Profiling on CPU0 and CPU1 ***
>
> (additionally hotplugging out CPU2 and CPU3 to get consistent hit numbers)
>
> (1)
>
> CPU0: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1997346 2207642 us *1.105* us 0.033 us
> activate_task 1997391 1840057 us *0.921* us 0.054 us
>
> CPU1: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1997455 2225960 us 1.114 us 0.034 us
> activate_task 1997410 1842603 us 0.922 us 0.052 us
>
> (2)
>
> CPU0: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1998538 2419719 us *1.210* us 0.061 us
> activate_task 1997119 1960401 us *0.981* us 0.034 us
>
> CPU1: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1996597 2400760 us 1.202 us 0.059 us
> activate_task 1998016 1985013 us 0.993 us 0.028 us
>
> (3)
>
> CPU0: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1997525 2155416 us *1.079* us 0.020 us
> activate_task 1997874 1899002 us *0.950* us 0.044 us
>
> CPU1: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1997935 2118648 us 1.060 us 0.017 us
> activate_task 1997586 1895162 us 0.948 us 0.044 us
>
> (4)
>
> CPU0: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1998246 2428121 us *1.215* us 0.062 us
> activate_task 1998252 2132141 us *1.067* us 0.020 us
>
> CPU1: Function Hit Time Avg s^2
> -------- --- ---- --- ---
> deactivate_task 1996154 2414194 us 1.209 us 0.060 us
> activate_task 1996148 2140667 us 1.072 us 0.021 us
I have rerun the tests that I ran previously on my octo-core arm64 (hikey):
20 iterations of perf bench sched pipe -T -l 50000
tip stands for tip/sched/core
uclamp enabled means both uclamp task and uclamp cgroup are enabled
the stdev is around 0.25% for all tests
                          root           level 1        level 2        level 3
tip uclamp disabled       50653          47188          44568          41925
tip uclamp enabled        48800(-3.66%)  45600(-3.37%)  42822(-3.92%)  40257(-3.98%)
/w patch uclamp disabled  50615(-0.08%)  47198(+0.02%)  44609(+0.09%)  41735(-0.45%)
/w patch uclamp enabled   49661(-1.96%)  46611(-1.22%)  43803(-1.72%)  41243(-1.63%)
Results are better with your patch.