Date:   Fri, 5 May 2017 11:36:03 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH v2 1/2] sched/fair: Fix how load gets propagated from
 cfs_rq to its sched_entity

Hi Tejun,

On 04/05/17 18:39, Tejun Heo wrote:
> Hello, Dietmar.
> 
> On Thu, May 04, 2017 at 10:49:51AM +0100, Dietmar Eggemann wrote:
>> On 04/05/17 07:21, Peter Zijlstra wrote:
>>> On Thu, May 04, 2017 at 07:51:29AM +0200, Peter Zijlstra wrote:

[...]

>>
>> I can't recreate this problem running 'numactl -N 0 ./schbench -m 2 -t
>> 10 -s 10000 -c 15000 -r 30' on my E5-2690 v2 (IVB-EP, 2 sockets, 10
>> cores / socket, 2 threads / core).
>>
>> I tried tip/sched/core, comparing running in 'cpu:/' and 'cpu:/foo',
>> and using your patch on top with all the combinations of {NO_}FUDGE,
>> {NO_}FUDGE2 with prop_type=shares_avg or prop_type_runnable.
>>
>> Were you able to see the issue on tip/sched/core w/o your patch on your
>> machine?
>>
>> The workload of n 60% periodic tasks on n logical cpus always creates a
>> very stable task distribution for me.
> 
> It depends heavily on what else is going on in the system.  On the
> test systems that I'm using, there's always something not-too-heavy
> going on.  The pattern over time isn't too varied, the latency
> results are usually stable, and the grouping of results is very clear,
> as the difference between the load balancer working properly and not
> shows up as up to an order of magnitude difference in p99 latencies.

OK, that makes sense. You do need the light background noise (independent
of schbench) to create work for the load balancer.
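
(If I wanted to fake such noise deliberately, a crude low duty-cycle burn
loop in the background should do; just a hypothetical sketch, not what
your test systems actually run:)

  # ~10% duty cycle: burn a cpu for 10ms, sleep for 90ms, repeat
  while :; do
      timeout 0.01 yes > /dev/null
      sleep 0.09
  done &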

I switched to my Hikey board (hot-plugged out the 2nd cluster, so 4
remaining cores with the performance governor) because we should see the
effect regardless of the topology. There is no background noise on my
Debian filesystem.
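
(For reference, that setup is just the usual sysfs interface; the cpu
numbering of the 2nd cluster below is an assumption on my side:)

  # assuming cpus 4-7 form the 2nd cluster
  for cpu in 4 5 6 7; do
      echo 0 > /sys/devices/system/cpu/cpu$cpu/online
  done
  # performance governor on the remaining cores
  for gov in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do
      echo performance > $gov
  done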

That's why I don't see any effect when I increase the C/S
(cputime/sleeptime) ratio while running 'schbench -m 2 -t 2 -s S -c C -r
30'. The only sources of disturbance are some additional schbench threads
which sometimes force one of the worker threads to get co-scheduled with
another worker thread.
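
(For the record, C/S maps to a duty cycle of C/(C+S), so the earlier
-s 10000 -c 15000 run is the 60% case. A sweep, with made-up C values,
could look like this:)

  # keep sleeptime fixed, increase cputime: 50%, 60%, ~67%, 75% duty cycle
  for C in 10000 15000 20000 30000; do
      ./schbench -m 2 -t 2 -s 10000 -c $C -r 30
  done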

https://drive.google.com/file/d/0B2f-ZAwV_YnmTDhWUk5ZRHdBRUU/view shows
such a case, where the additional schbench thread 'schbench-2206' (green
marker line in the picture) forces the worker thread 'schbench-2209' to
wakeup-migrate from cpu3 to cpu0, where it gets co-scheduled with the
worker thread 'schbench-2210' for a while.

> For these differences to matter, you need to push the machine so that
> it's right at the point of saturation - e.g. increase duty cycle till
> p99 starts to deteriorate w/o cgroup.
> 
> Thanks.
> 
