Message-ID: <20200615081041.GA16990@vingu-book>
Date: Mon, 15 Jun 2020 10:10:41 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Xing Zhengjun <zhengjun.xing@...ux.intel.com>
Cc: Hillf Danton <hdanton@...a.com>,
kernel test robot <rong.a.chen@...el.com>,
Ingo Molnar <mingo@...nel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Valentin Schneider <valentin.schneider@....com>,
Phil Auld <pauld@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [LKP] [sched/fair] 070f5e860e: reaim.jobs_per_min -10.5% regression
Hi Xing,
On Monday, 15 June 2020 at 15:26:59 (+0800), Xing Zhengjun wrote:
>
>
> On 6/12/2020 7:06 PM, Hillf Danton wrote:
> >
> > On Fri, 12 Jun 2020 14:36:49 +0800 Xing Zhengjun wrote:
...
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8215,12 +8215,8 @@ group_has_capacity(unsigned int imbalanc
> >  	if (sgs->sum_nr_running < sgs->group_weight)
> >  		return true;
> >  
> > -	if ((sgs->group_capacity * imbalance_pct) <
> > -			(sgs->group_runnable * 100))
> > -		return false;
> > -
> > -	if ((sgs->group_capacity * 100) >
> > -			(sgs->group_util * imbalance_pct))
> > +	if ((sgs->group_capacity * 100) > (sgs->group_util * imbalance_pct) &&
> > +	    (sgs->group_capacity * 100) > (sgs->group_runnable * imbalance_pct))
> >  		return true;
> >  
> >  	return false;
> > @@ -8240,12 +8236,8 @@ group_is_overloaded(unsigned int imbalan
> >  	if (sgs->sum_nr_running <= sgs->group_weight)
> >  		return false;
> >  
> > -	if ((sgs->group_capacity * 100) <
> > -			(sgs->group_util * imbalance_pct))
> > -		return true;
> > -
> > -	if ((sgs->group_capacity * imbalance_pct) <
> > -			(sgs->group_runnable * 100))
> > +	if ((sgs->group_capacity * 100) < (sgs->group_util * imbalance_pct) ||
> > +	    (sgs->group_capacity * 100) < (sgs->group_runnable * imbalance_pct))
> >  		return true;
> >  
> >  	return false;
> > 
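If I read the change above correctly, it folds the runnable and util checks into a single condition and applies imbalance_pct on the same side for both signals. Below is a minimal, standalone sketch of the resulting group_has_capacity() test; it is purely illustrative (the helper name and scalar parameters are mine, only the logic mirrors the patched check):

#include <stdbool.h>

/*
 * Illustrative sketch only, not part of the patch above: the group is
 * reported as having spare capacity when it runs fewer tasks than it
 * has CPUs, or when both util and runnable stay below the group
 * capacity scaled down by imbalance_pct.
 */
static inline bool sketch_group_has_capacity(unsigned int imbalance_pct,
					     unsigned long capacity,
					     unsigned long util,
					     unsigned long runnable,
					     unsigned int nr_running,
					     unsigned int weight)
{
	if (nr_running < weight)
		return true;

	if ((capacity * 100) > (util * imbalance_pct) &&
	    (capacity * 100) > (runnable * imbalance_pct))
		return true;

	return false;
}
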
>
> I applied the patch based on v5.7; the regression is still there.
Thanks for the test. I don't know whether it is relevant, but the results look a bit better with the patch, so I'd like to check whether the problem is only a matter of threshold.

Could you try the patch below? It is quite aggressive, but it will help to confirm this.

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 28be1c984a42..3c51d557547b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8322,10 +8322,13 @@ static inline int sg_imbalanced(struct sched_group *group)
 static inline bool
 group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
+	unsigned long imb;
+
 	if (sgs->sum_nr_running < sgs->group_weight)
 		return true;
 
-	if ((sgs->group_capacity * imbalance_pct) <
+	imb = sgs->sum_nr_running * 100;
+	if ((sgs->group_capacity * imb) <
 			(sgs->group_runnable * 100))
 		return false;
 
@@ -8347,6 +8350,8 @@ group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 static inline bool
 group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
+	unsigned long imb;
+
 	if (sgs->sum_nr_running <= sgs->group_weight)
 		return false;
 
@@ -8354,7 +8359,8 @@ group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 			(sgs->group_util * imbalance_pct))
 		return true;
 
-	if ((sgs->group_capacity * imbalance_pct) <
+	imb = sgs->sum_nr_running * 100;
+	if ((sgs->group_capacity * imb) <
 			(sgs->group_runnable * 100))
 		return true;
 
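To make the intent explicit: in both group_has_capacity() and group_is_overloaded(), the runnable-based test no longer compares runnable against the capacity scaled by imbalance_pct but against the capacity scaled by the number of running tasks. A standalone, purely illustrative restatement (helper name and scalar parameters are mine, not from fair.c):

#include <stdbool.h>

/*
 * Illustrative sketch only: with the patch above, the runnable-side
 * test scales the capacity by (nr_running * 100) instead of by
 * imbalance_pct, so a group is far less likely to be classified as
 * overloaded (or as lacking capacity) on the basis of its runnable
 * average alone.
 */
static inline bool sketch_runnable_exceeds_capacity(unsigned long capacity,
						    unsigned long runnable,
						    unsigned int nr_running)
{
	unsigned long imb = nr_running * 100;

	return (capacity * imb) < (runnable * 100);
}

Because imb grows with the number of running tasks, this term effectively stops firing as soon as the group runs more than one task, which is why the patch is only meant to confirm whether the runnable threshold is what triggers the regression.
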
>
> =========================================================================================
> tbox_group/testcase/rootfs/kconfig/compiler/runtime/nr_task/debug-setup/test/cpufreq_governor/ucode:
>
> lkp-ivb-d04/reaim/debian-x86_64-20191114.cgz/x86_64-rhel-7.6/gcc-7/300s/100%/test/five_sec/performance/0x21
>
> commit:
> 9f68395333ad7f5bfe2f83473fed363d4229f11c
> 070f5e860ee2bf588c99ef7b4c202451faa48236
> v5.7
> 6b33257768b8dd3982054885ea310871be2cfe0b (Hillf's patch)
>
> 9f68395333ad7f5b 070f5e860ee2bf588c99ef7b4c2                       v5.7 6b33257768b8dd3982054885ea3
> ---------------- --------------------------- --------------------------- ---------------------------
>          %stddev     %change         %stddev     %change         %stddev     %change         %stddev
>              \          |                \          |                \          |                \
>       0.69           -10.3%       0.62        -9.1%       0.62       -10.1%       0.62        reaim.child_systime
>       0.62            -1.0%       0.61        +0.5%       0.62        +0.3%       0.62        reaim.child_utime
>      66870           -10.0%      60187        -7.6%      61787        -8.3%      61305        reaim.jobs_per_min
>      16717           -10.0%      15046        -7.6%      15446        -8.3%      15326        reaim.jobs_per_min_child
>      97.84            -1.1%      96.75        -0.4%      97.43        -0.5%      97.37        reaim.jti
>      72000           -10.8%      64216        -8.3%      66000        -8.3%      66000        reaim.max_jobs_per_min
>       0.36           +10.6%       0.40        +7.8%       0.39        +9.4%       0.39        reaim.parent_time
>       1.58 ± 2%      +71.0%       2.70 ± 2%  +26.9%       2.01 ± 2%  +33.2%       2.11        reaim.std_dev_percent
>       0.00 ± 5%     +110.4%       0.01 ± 3%  +48.8%       0.01 ± 7%  +65.3%       0.01 ± 3%   reaim.std_dev_time
>      50800            -2.4%      49600        -1.6%      50000        -1.8%      49866        reaim.workload
>
>
>
> --
> Zhengjun Xing