Message-ID: <9021d4d99370162a815928cd6467f4a5@linux.ibm.com>
Date:   Mon, 05 Jun 2023 10:07:16 +0200
From:   Tobias Huschle <huschle@...ux.ibm.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     linux-kernel@...r.kernel.org, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
        sshegde@...ux.vnet.ibm.com, srikar@...ux.vnet.ibm.com,
        linuxppc-dev@...ts.ozlabs.org
Subject: Re: [RFC 1/1] sched/fair: Consider asymmetric scheduler groups in
 load balancer

On 2023-05-16 15:36, Vincent Guittot wrote:
> On Mon, 15 May 2023 at 13:46, Tobias Huschle <huschle@...ux.ibm.com> 
> wrote:
>> 
>> The current load balancer implementation implies that scheduler
>> groups, within the same domain, all host the same number of CPUs.
>> This is reflected in the condition that a scheduler group, which is
>> load balancing and classified as having spare capacity, should pull
>> work from the busiest group if the local group runs fewer processes
>> than the busiest one. This implies that these two groups should run
>> the same number of processes, which is problematic if the groups are
>> not of the same size.
>> 
>> The assumption that scheduler groups within the same scheduler domain
>> host the same number of CPUs appears to be true for non-s390
>> architectures. Nevertheless, s390 can have scheduler groups of unequal
>> size.
>> 
>> This introduces a performance degradation in the following scenario:
>> 
>> Consider a system with 8 CPUs: 6 CPUs are located on one CPU socket,
>> the remaining 2 are located on another socket:
>> 
>> Socket   -----1-----    -2-
>> CPU      1 2 3 4 5 6    7 8
>> 
>> Placing some workload ( x = one task ) yields the following
>> scenarios:
>> 
>> The first 5 tasks are distributed evenly across the two groups.
>> 
>> Socket   -----1-----    -2-
>> CPU      1 2 3 4 5 6    7 8
>>          x x x          x x
>> 
>> Adding a 6th task yields the following distribution:
>> 
>> Socket   -----1-----    -2-
>> CPU      1 2 3 4 5 6    7 8
>> SMT1     x x x          x x
>> SMT2                    x
> 
> Your description is a bit confusing for me. What you name CPU above
> should be named Core, shouldn't it?
> 
> Could you share with us your scheduler topology ?
> 

You are correct, it should say core instead of CPU.

One actual configuration from one of my test machines (lscpu -e):

CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
  0    0      0    0      0    0 0:0:0         yes yes        horizontal    0
  1    0      0    0      0    0 1:1:0         yes yes        horizontal    1
  2    0      0    0      0    1 2:2:0         yes yes        horizontal    2
  3    0      0    0      0    1 3:3:0         yes yes        horizontal    3
  4    0      0    0      0    2 4:4:0         yes yes        horizontal    4
  5    0      0    0      0    2 5:5:0         yes yes        horizontal    5
  6    0      0    0      0    3 6:6:0         yes yes        horizontal    6
  7    0      0    0      0    3 7:7:0         yes yes        horizontal    7
  8    0      0    0      0    4 8:8:0         yes yes        horizontal    8
  9    0      0    0      0    4 9:9:0         yes yes        horizontal    9
 10    0      0    0      0    5 10:10:0       yes yes        horizontal   10
 11    0      0    0      0    5 11:11:0       yes yes        horizontal   11
 12    0      0    0      1    6 12:12:0       yes yes        horizontal   12
 13    0      0    0      1    6 13:13:0       yes yes        horizontal   13
 14    0      0    0      1    7 14:14:0       yes yes        horizontal   14
 15    0      0    0      1    7 15:15:0       yes yes        horizontal   15

So, 6 cores / 12 CPUs in one group and 2 cores / 4 CPUs in the other.

If I run stress-ng with 8 CPU stressors on the original code, I get a
distribution like this:

00 01 02 03 04 05 06 07 08 09 10 11  || 12 13 14 15
                 x     x     x     x      x  x  x  x

This means that the two cores in the smaller group are running into SMT
while two cores in the larger group are still idle. This is caused by the
prefer_sibling path, which really wants to see both groups run the same
number of tasks.
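To make the effect more concrete, here is a small user-space sketch (not
kernel code; the group weights and the starting 6/2 placement are just the
numbers from the example above) that replays the prefer_sibling condition
for the two group sizes:

	#include <stdio.h>

	/*
	 * Toy model of the prefer_sibling check in find_busiest_group():
	 * the big group has 12 CPUs, the small group 4, and 8 tasks start
	 * out placed proportionally (6/2).  The small group keeps pulling
	 * a task as long as the condition says "force balance".
	 */
	int main(void)
	{
		unsigned int busiest_weight = 12, local_weight = 4;
		unsigned int busiest_nr = 6, local_nr = 2;

		/* Original condition: compares raw task counts only. */
		while (busiest_nr > local_nr + 1) {
			busiest_nr--;
			local_nr++;
		}
		printf("raw count condition -> %u / %u tasks\n",
		       busiest_nr, local_nr);

		/* Proposed condition: scales the counts by the group weights. */
		busiest_nr = 6;
		local_nr = 2;
		while (busiest_nr * local_weight >
		       local_nr * busiest_weight + 1) {
			busiest_nr--;
			local_nr++;
		}
		printf("weighted condition  -> %u / %u tasks\n",
		       busiest_nr, local_nr);

		return 0;
	}

With the raw count the small group keeps pulling until it ends up with 4
tasks on its 4 CPUs (i.e. SMT), while the weighted check leaves the
proportional 6/2 placement alone.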

>> 
>> The task is added to the 2nd scheduler group, as the scheduler has
>> the assumption that scheduler groups are of the same size, so they
>> should also host the same number of tasks. This makes CPU 7 run into
>> SMT, which comes with a performance penalty. This means that in the
>> window of 6-8 tasks, load balancing is done suboptimally, because SMT
>> is used although there is no reason to do so, as fully idle CPUs are
>> still available.
>> 
>> Taking the weight of the scheduler groups into account ensures that
>> a load balancing CPU within a smaller group will not try to pull
>> tasks from a bigger group while the bigger group still has idle CPUs
>> available.
>> 
>> Signed-off-by: Tobias Huschle <huschle@...ux.ibm.com>
>> ---
>>  kernel/sched/fair.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 48b6f0ca13ac..b1307d7e4065 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -10426,7 +10426,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>>          * group's child domain.
>>          */
>>         if (sds.prefer_sibling && local->group_type == group_has_spare &&
>> -           busiest->sum_nr_running > local->sum_nr_running + 1)
>> +           busiest->sum_nr_running * local->group_weight >
>> +                       local->sum_nr_running * busiest->group_weight + 1)
> 
> This is the prefer_sibling path. Could it be that you should disable
> prefer_sibling between your sockets for such a topology? The default
> path compares the number of idle CPUs when groups have spare capacity.
> 
> 

If I disable prefer_sibling (I played around with it a bit), I run into
the problem that the tasks are distributed such that each group has the
same number of idle CPUs, which yields distributions similar to this:

00 01 02 03 04 05 06 07 08 09 10 11  || 12 13 14 15
     x  x  x     x  x     x     x  x

Now both groups have 4 idle CPUs, which fulfills the criterion imposed by
the load balancer, but the larger group is now running SMT while the
smaller one is just idle.

So, in this asymmetric setup, neither criterion works in an optimal way.
Going for the same number of idle CPUs, or alternatively for the same
number of running processes, causes a sub-optimal distribution of tasks,
leading to unnecessary SMT.

It also seems possible to address the regular load balancing path by
aiming for the same unused capacity between groups instead of the same
number of idle CPUs. This seems to have been considered in the past, but
the choice went in favor of the number of idle CPUs. Since this decision
was already actively taken, I focused on the prefer_sibling path.
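Just to illustrate that difference with the numbers from above (again a
user-space sketch, one task per CPU assumed, no SMT pressure modelled):

	#include <stdio.h>

	/*
	 * "Same number of idle CPUs" vs. "same unused capacity" for two
	 * groups of 12 and 4 CPUs and 8 running tasks.
	 */
	int main(void)
	{
		unsigned int big = 12, small = 4, tasks = 8;

		/* Equal idle CPUs: both groups keep the same number idle. */
		unsigned int idle_each = (big + small - tasks) / 2;   /* 4 each */
		printf("equal idle CPUs  : %u / %u tasks\n",
		       big - idle_each, small - idle_each);           /* 8 / 0 */

		/* Equal unused capacity: split proportionally to group size. */
		printf("equal utilization: %u / %u tasks\n",
		       tasks * big / (big + small),
		       tasks * small / (big + small));                /* 6 / 2 */

		return 0;
	}

The first criterion ends up with all 8 tasks in the larger group (SMT on
some of its cores), the second with the proportional 6/2 split.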

The question now would be how to address this properly (or whether I'm
missing something here). As mentioned in the cover letter, this was the
most simplistic and least invasive approach I could find; others might be
more sophisticated but also have some side-effects.

I have a bit of a hard time leaving this one as-is, as it just introduces
two additional multiplications with no effect for most architectures.
Maybe an architecture-specific inline function that the compiler can
optimize away if not needed?
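Something along these lines is what I have in mind;
arch_sched_asym_group_weights() is a made-up name and this is only a
sketch, not a patch:

	/*
	 * Sketch only -- arch_sched_asym_group_weights() is a made-up name.
	 * The generic version is a compile-time constant, so the compiler
	 * can drop the extra multiplications on architectures whose
	 * scheduler groups always have equal weight.
	 */
	#ifndef arch_sched_asym_group_weights
	static inline bool arch_sched_asym_group_weights(void)
	{
		return false;
	}
	#endif

	/* In find_busiest_group(), the check could then read roughly: */
	if (sds.prefer_sibling && local->group_type == group_has_spare &&
	    (arch_sched_asym_group_weights() ?
	     busiest->sum_nr_running * local->group_weight >
			local->sum_nr_running * busiest->group_weight + 1 :
	     busiest->sum_nr_running > local->sum_nr_running + 1))
		goto force_balance;

s390 could then override the helper to return true, while everyone else
keeps the current comparison with no extra arithmetic.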

>>                 goto force_balance;
>> 
>>         if (busiest->group_type != group_overloaded) {
>> --
>> 2.34.1
>> 
