Message-ID: <20170504173901.GB7288@htj.duckdns.org>
Date: Thu, 4 May 2017 13:39:01 -0400
From: Tejun Heo <tj@...nel.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH v2 1/2] sched/fair: Fix how load gets propagated from
cfs_rq to its sched_entity

Hello, Dietmar.

On Thu, May 04, 2017 at 10:49:51AM +0100, Dietmar Eggemann wrote:
> On 04/05/17 07:21, Peter Zijlstra wrote:
> > On Thu, May 04, 2017 at 07:51:29AM +0200, Peter Zijlstra wrote:
> >
> >> Urgh, and my numbers were so pretty :/
> >
> > Just to clarify how to run schbench: I limited it to a single socket
> > (as that is what you have) and set -t to the number of cores in the
> > socket (not the number of threads).
> >
> > Furthermore, my machine is _idle_: if I don't do anything, it doesn't do
> > _anything_.
> >
>
> I can't recreate this problem running 'numactl -N 0 ./schbench -m 2 -t
> 10 -s 10000 -c 15000 -r 30' on my E5-2690 v2 (IVB-EP, 2 sockets, 10
> cores / socket, 2 threads / core)
>
> I tried tip/sched/core, comparing runs in 'cpu:/' and 'cpu:/foo', and
> using your patch on top with all the combinations of {NO_}FUDGE,
> {NO_}FUDGE2 with prop_type=shares_avg or prop_type_runnable.
>
> Were you able to see the issue on tip/sched/core w/o your patch on your
> machine?
>
> The workload of n 60% periodic tasks on n logical cpus always creates a
> very stable task distribution for me.
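
(For reference, a sketch of where the 60% figure comes from, assuming
schbench's -s/-c are per-request sleeptime/cputime in usec:

    # duty cycle = cputime / (cputime + sleeptime)
    #            = 15000 / (15000 + 10000) = 0.6, i.e. 60%
    # -m 2 -t 10 gives 2 * 10 = 20 workers, one per logical cpu of the
    # 10-core / 20-thread socket selected by numactl -N 0
    numactl -N 0 ./schbench -m 2 -t 10 -s 10000 -c 15000 -r 30

so the quoted run is roughly n 60% periodic tasks on n logical cpus.)
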
It depends heavily on what else is going on in the system. On the
test systems that I'm using, there's always something not-too-heavy
going on. The pattern over time isn't too varied, and the latency
results are usually stable. The grouping of results is very clear,
as the difference between the load balancer working properly and not
shows up as up to an order of magnitude difference in p99 latencies.

For these differences to matter, you need to push the machine so that
it's right at the point of saturation - e.g. increase duty cycle till
p99 starts to deteriorate w/o cgroup.
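
Something along these lines, as a sketch (same schbench flag meanings
assumed as in the quoted command above, machine otherwise idle):

    # raise the duty cycle step by step until p99 starts to degrade in
    # the root cgroup, then rerun the same point inside cpu:/foo and
    # compare p99
    for c in 15000 17500 20000 22500; do
        numactl -N 0 ./schbench -m 2 -t 10 -s 10000 -c $c -r 30
    done
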
Thanks.
--
tejun