Message-ID: <02c93bb0-ad3c-9a01-0351-e4ee6e56bf1b@arm.com>
Date:   Thu, 4 May 2017 10:49:51 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Peter Zijlstra <peterz@...radead.org>, Tejun Heo <tj@...nel.org>
Cc:     Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH v2 1/2] sched/fair: Fix how load gets propagated from
 cfs_rq to its sched_entity

On 04/05/17 07:21, Peter Zijlstra wrote:
> On Thu, May 04, 2017 at 07:51:29AM +0200, Peter Zijlstra wrote:
> 
>> Urgh, and my numbers were so pretty :/
> 
> Just to clarify on how to run schbench, I limited to a single socket (as
> that is what you have) and set -t to the number of cores in the socket
> (not the number of threads).
> 
> Furthermore, my machine is _idle_; if I don't do anything, it doesn't
> do _anything_.
>

I can't recreate this problem running 'numactl -N 0 ./schbench -m 2
-t 10 -s 10000 -c 15000 -r 30' on my E5-2690 v2 (IVB-EP, 2 sockets,
10 cores / socket, 2 threads / core).
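For reference, this is how I read the schbench parameters (going by
its usage output; worth double-checking against the version you run):

    numactl -N 0   restrict the run to NUMA node 0, i.e. one socket
    -m 2 -t 10     2 message threads x 10 workers = 20 tasks, one per
                   logical cpu of the socket (10 cores * 2 threads)
    -s 10000       sleep 10000 usec per request
    -c 15000       burn 15000 usec of cpu time per request
    -r 30          run for 30 seconds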

I tried tip/sched/core, comparing runs in 'cpu:/' and 'cpu:/foo', and
with your patch on top in all combinations of {NO_}FUDGE and
{NO_}FUDGE2, with prop_type=shares_avg or prop_type=runnable.
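
In case it helps anyone reproduce this: assuming the patch wires
FUDGE/FUDGE2 up as SCHED_FEAT() entries, they can be flipped at
runtime through the usual debugfs knob, e.g.:

    # cat /sys/kernel/debug/sched_features      # show current state
    # echo FUDGE      > /sys/kernel/debug/sched_features
    # echo NO_FUDGE2  > /sys/kernel/debug/sched_features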

Were you able to see the issue on tip/sched/core w/o your patch on your
machine?

A workload of n 60% periodic tasks on n logical cpus always produces a
very stable task distribution for me.
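
The 60% comes straight from the -c/-s values above, assuming
schbench's cputime/sleeptime semantics:

    duty cycle = c / (c + s) = 15000 / (15000 + 10000) = 60%

i.e. each of the 20 workers runs 15 ms and sleeps 10 ms per period.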
