Message-ID: <CAKfTPtAEwFMbobPDXTShBezZhwvxkNPTMWfb3BUfiExaCaq9pg@mail.gmail.com>
Date: Thu, 4 May 2017 21:02:39 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Tejun Heo <tj@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH 2/2] sched/fair: Always propagate runnable_load_avg
Hi Tejun,
On 4 May 2017 at 19:43, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On Thu, May 04, 2017 at 10:19:46AM +0200, Vincent Guittot wrote:
>> > schbench inside a cgroup and have some base load, it is actually
>> > expected to show worse latency. You need to give higher weight to the
>> > cgroup matching the number of active threads (to be accurate, scaled
>> > by duty cycle but shouldn't matter too much in practice).
>>
>> I don't have to change any cgroup weight with mainline to get good
>> numbers, which means that the base load, which is quite close to
>> zero, is probably not the problem.
>
> So, while that *could* be the case, it could also be that the
> baseline incorrectly favors the nested cfs_rqs over other tasks
> because the nested runnables are inflated with blocked load avgs.  I
> think it'd be a good idea to test with matching weights to put things
> on even ground.
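Just to make sure we are talking about the same setup, here is a
minimal sketch of the matching-weight test you describe (assuming
cgroup v1 with the cpu controller mounted at /sys/fs/cgroup/cpu and an
already-created "schbench" group; the path and the helper are
illustrative, not my actual test harness):

#include <stdio.h>

/*
 * Illustrative only: give the schbench group a weight matching its
 * number of active worker threads, i.e. nr_threads * 1024 (one
 * default task weight per thread), as suggested above.
 */
static int set_group_shares(unsigned int nr_threads)
{
	FILE *f = fopen("/sys/fs/cgroup/cpu/schbench/cpu.shares", "w");

	if (!f)
		return -1;
	fprintf(f, "%u\n", nr_threads * 1024u);
	return fclose(f);
}

int main(void)
{
	return set_group_shares(16); /* e.g. 16 schbench workers */
}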
In the trace I have uploaded, you will see that regressions happen
even though there are no other runnable threads around, so it's not a
matter of background activity disturbing schbench.
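For reference, the distinction we are arguing about, as a toy model
(the field names only mirror the PELT ones; this is not the kernel
code): load_avg still carries the decaying contribution of blocked
tasks, while runnable_load_avg drops as soon as a task blocks, so
propagating the former upward makes a mostly-sleeping group look
heavier than it is.

#include <stdio.h>

struct pelt_sketch {
	unsigned long load_avg;          /* runnable + blocked */
	unsigned long runnable_load_avg; /* runnable only */
};

/* propagate_runnable == 1 models the behaviour of this patch */
static unsigned long propagated_load(const struct pelt_sketch *sa,
				     int propagate_runnable)
{
	return propagate_runnable ? sa->runnable_load_avg : sa->load_avg;
}

int main(void)
{
	/* e.g. 100 units of blocked load plus 10 units runnable */
	struct pelt_sketch sa = { .load_avg = 110, .runnable_load_avg = 10 };

	printf("propagated before: %lu, after: %lu\n",
	       propagated_load(&sa, 0), propagated_load(&sa, 1));
	return 0;
}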
Thanks
Vincent
>
> Thanks.
>
> --
> tejun