Message-ID: <20201026151846.GA17073@vingu-book>
Date:   Mon, 26 Oct 2020 16:18:46 +0100
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Chris Mason <clm@...com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Rik van Riel <riel@...riel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fix scheduler regression from "sched/fair: Rework
 load_balance()"

On Monday, 26 Oct 2020 at 11:05:35 (-0400), Chris Mason wrote:
> 
> 
> On 26 Oct 2020, at 10:24, Vincent Guittot wrote:
> 
> > On Monday, 26 Oct 2020 at 08:45:27 (-0400), Chris Mason wrote:
> > > On 26 Oct 2020, at 4:39, Vincent Guittot wrote:
> > > 
> > > > Hi Chris
> > > > 
> > > > On Sat, 24 Oct 2020 at 01:49, Chris Mason <clm@...com> wrote:
> > > > > 
> > > > > Hi everyone,
> > > > > 
> > > > > We’re validating a new kernel in the fleet, and compared
> > > > > with v5.2,
> > > > 
> > > > Which version are you using?
> > > > Several improvements have been added since v5.5 and the rework of
> > > > load_balance.
> > > 
> > > We’re validating v5.6, but all of the numbers referenced in this
> > > patch are
> > > against v5.9.  I usually try to back port my way to victory on this
> > > kind of
> > > thing, but mainline seems to behave exactly the same as 0b0695f2b34a
> > > wrt
> > > this benchmark.
> > 
> > OK, thanks for the confirmation.
> > 
> > I have been able to reproduce the problem on my setup.
> 
> Thanks for taking a look!  Can I ask what parameters you used on schbench,
> and what kind of results you saw?  Mostly I’m asking to make sure it’s a
> useful tool, but also because the patch didn’t change things here.
> 

With the latest tip/sched/core on my dual quad-core system:
schbench -t 4 -r 10 -c 1000000 -s 1000
Latency percentiles (usec)
50.0th: 16
75.0th: 23
90.0th: 32
95.0th: 41
*99.0th: 15120
99.5th: 15120
99.9th: 15120
min=0, max=15130

With the patch:
schbench -t 4 -r 10 -c 1000000 -s 1000 
Latency percentiles (usec)
50.0th: 28
75.0th: 32
90.0th: 36
95.0th: 56
*99.0th: 1310
99.5th: 1310
99.9th: 1310
min=0, max=1309

> > 
> > Could you try the fix below?
> > 
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9049,7 +9049,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
> >          * emptying busiest.
> >          */
> >         if (local->group_type == group_has_spare) {
> > -               if (busiest->group_type > group_fully_busy) {
> > +               if ((busiest->group_type > group_fully_busy) &&
> > +                   (busiest->group_weight > 1)) {
> >                         /*
> >                          * If busiest is overloaded, try to fill spare
> >                          * capacity. This might end up creating spare capacity
> > 
> > 
> > When we calculate an imbalance at the smallest level, i.e. between CPUs
> > (group_weight == 1), we should try to spread tasks on CPUs instead of
> > trying to fill spare capacity.
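> > 
> > As a rough standalone model (not the kernel code; only the
> > migration_type names follow mainline kernel/sched/fair.c), the
> > decision after the patch would look like this:
> > 
> > #include <stdio.h>
> > 
> > enum migration_type { migrate_util, migrate_task };
> > 
> > /* An overloaded busiest group is only asked to fill spare capacity
> >  * (migrate_util) when it spans more than one CPU; a single-CPU
> >  * group is balanced by spreading tasks (migrate_task) instead. */
> > static enum migration_type pick(int busiest_overloaded,
> > 				unsigned int group_weight)
> > {
> > 	if (busiest_overloaded && group_weight > 1)
> > 		return migrate_util;
> > 	return migrate_task;
> > }
> > 
> > int main(void)
> > {
> > 	printf("overloaded, weight 4: %s\n",
> > 	       pick(1, 4) == migrate_util ? "migrate_util" : "migrate_task");
> > 	printf("overloaded, weight 1: %s\n",
> > 	       pick(1, 1) == migrate_util ? "migrate_util" : "migrate_task");
> > 	return 0;
> > }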
> 
> With this patch on top of v5.9, my latencies are unchanged.  I’m building
> against current Linus now just in case I’m missing other fixes.
> 

I can't remember any changes in mainline that would make a difference.

I had another way to fix it, but it could impact other use cases more, and
the improvement was smaller:

---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ebe15e36f336..415927885228 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7707,7 +7707,7 @@ static int detach_tasks(struct lb_env *env)
 		case migrate_util:
 			util = task_util_est(p);

-			if (util > env->imbalance)
+			if ((util >> env->sd->nr_balance_failed) > env->imbalance)
 				goto next;

 			env->imbalance -= util;
--
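
As a standalone illustration of the idea (made-up numbers, not the kernel
code): each failed balance round halves the utilization that is compared
against the remaining imbalance, so progressively larger tasks become
eligible for migration:

#include <stdio.h>

int main(void)
{
	unsigned long util = 512;      /* stand-in for task_util_est(p) */
	unsigned long imbalance = 300; /* stand-in for env->imbalance   */
	unsigned int failed;

	/* (util >> failed) > imbalance mirrors the patched condition:
	 * with 0 failures the task is skipped (512 > 300); after one
	 * failed balance attempt it becomes eligible (256 <= 300). */
	for (failed = 0; failed < 3; failed++)
		printf("nr_balance_failed=%u: %s\n", failed,
		       (util >> failed) > imbalance ? "skip" : "migrate");
	return 0;
}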


>
> -chris
