Message-Id: <be91602a-0243-e094-8c8f-ceed314d10ce@linux.ibm.com>
Date:   Mon, 1 Jul 2019 15:27:54 +0530
From:   Parth Shah <parth@...ux.ibm.com>
To:     Subhra Mazumdar <subhra.mazumdar@...cle.com>,
        linux-kernel@...r.kernel.org
Cc:     peterz@...radead.org, mingo@...hat.com, tglx@...utronix.de,
        steven.sistare@...cle.com, dhaval.giani@...cle.com,
        daniel.lezcano@...aro.org, vincent.guittot@...aro.org,
        viresh.kumar@...aro.org, tim.c.chen@...ux.intel.com,
        mgorman@...hsingularity.net
Subject: Re: [PATCH v3 5/7] sched: SIS_CORE to disable idle core search



On 6/29/19 3:59 AM, Subhra Mazumdar wrote:
> 
> On 6/28/19 12:01 PM, Parth Shah wrote:
>>
>> On 6/27/19 6:59 AM, subhra mazumdar wrote:
>>> Use SIS_CORE to disable the idle core search. For some workloads
>>> select_idle_core() becomes a scalability bottleneck, and removing it improves
>>> throughput. There are also workloads where disabling it can hurt latency,
>>> so an option is needed.
>>>
>>> Signed-off-by: subhra mazumdar <subhra.mazumdar@...cle.com>
>>> ---
>>>   kernel/sched/fair.c | 8 +++++---
>>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index c1ca88e..6a74808 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -6280,9 +6280,11 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>>>       if (!sd)
>>>           return target;
>>>
>>> -    i = select_idle_core(p, sd, target);
>>> -    if ((unsigned)i < nr_cpumask_bits)
>>> -        return i;
>>> +    if (sched_feat(SIS_CORE)) {
>>> +        i = select_idle_core(p, sd, target);
>>> +        if ((unsigned)i < nr_cpumask_bits)
>>> +            return i;
>>> +    }
>> This can cause a significant performance loss when disabled. select_idle_core()
>> spreads workloads quickly across the cores, so disabling it leaves much of that
>> work to be offloaded to the load balancer, which then has to move tasks across
>> cores. Latency-sensitive and long-running multi-threaded workloads should see a
>> regression under these conditions.
> Yes, in the case of SPARC SMT8 I did notice that (see the cover letter). That's
> why it is a feature that is ON by default, but can be turned OFF for specific
> workloads on x86 SMT2 that can benefit from it.
>> Also, systems like POWER9 have sd_llc spanning only a pair of cores, so they
>> won't benefit from the limits; hence hiding your code in select_idle_cpu()
>> behind static keys would be much preferred.
> If it doesn't hurt then I don't see the point.
> 
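
Just to spell out the static key suggestion above, I meant something along these
lines (a rough sketch only, not from your series; the key name is illustrative):

/*
 * Guard the idle-core search with a static key instead of the sched_feat()
 * test, so platforms where sd_llc is a single core pair (e.g. POWER9) can
 * patch the branch out of the fast path. Needs <linux/jump_label.h>.
 */
DEFINE_STATIC_KEY_TRUE(sched_sis_core);

	/* ... inside select_idle_sibling(), in place of the sched_feat() check ... */
	if (static_branch_likely(&sched_sis_core)) {
		i = select_idle_core(p, sd, target);
		if ((unsigned)i < nr_cpumask_bits)
			return i;
	}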

So these are the results from a POWER9 system with your patches:
System configuration: 2 sockets, 44 cores, 176 CPUs

Experiment setup:
=================
=> Setup 1:
- 44 tasks doing just while(1), to make select_idle_core() return -1 most of the time (see the spinner sketch below)
- perf bench sched messaging -g 1 -l 1000000
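
The while(1) tasks were plain busy-loop processes, roughly like the following
(a minimal sketch; the exact spinner used may differ):

/*
 * 44 processes, each spinning in while(1), so that no core in the LLC is
 * fully idle and select_idle_core() returns -1 most of the time.
 */
#include <unistd.h>

int main(void)
{
	int i;

	for (i = 0; i < 44; i++)
		if (fork() == 0)
			while (1)
				;	/* child: just burn the CPU */

	pause();	/* parent waits; kill the process group to stop */
	return 0;
}
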
+-----------+--------+--------------+--------+
| Baseline  | stddev |    Patch     | stddev |
+-----------+--------+--------------+--------+
|       135 |   3.21 | 158(-17.03%) |   4.69 |
+-----------+--------+--------------+--------+

=> Setup 2:
- schbench -m44 -t 1
Latency percentiles (usec):
+=======+==========+=========+=========+==========+
| %ile  | Baseline | stddev  |  Patch  |  stddev  |
+=======+==========+=========+=========+==========+
|    50 |       10 |    3.49 |      10 |     2.29 |
+-------+----------+---------+---------+----------+
|    95 |      467 |    4.47 |     469 |     0.81 |
+-------+----------+---------+---------+----------+
|    99 |      571 |   21.32 |     584 |    18.69 |
+-------+----------+---------+---------+----------+
|  99.5 |      629 |   30.05 |     641 |    20.95 |
+-------+----------+---------+---------+----------+
|  99.9 |      780 |   40.38 |     773 |     44.2 |
+-------+----------+---------+---------+----------+

I guess it doesn't make much difference in the schbench results, but hackbench
(perf bench sched messaging) seems to show an observable regression.
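
As an aside, since SIS_CORE is a sched_feat, anyone wanting to compare with and
without the feature on the same patched kernel should be able to flip it at
runtime through debugfs (untested sketch; assumes CONFIG_SCHED_DEBUG and debugfs
mounted at the usual path):

/*
 * Equivalent of: echo NO_SIS_CORE > /sys/kernel/debug/sched_features
 * Writing "SIS_CORE" turns the feature back on.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/sched_features", "w");

	if (!f) {
		perror("sched_features");
		return 1;
	}
	fputs("NO_SIS_CORE", f);
	fclose(f);
	return 0;
}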


Best,
Parth
