Message-ID: <6248cdfe-c5d1-35f2-003b-3a260b5a94ea@fb.com>
Date: Tue, 25 Apr 2017 17:15:32 -0400
From: Chris Mason <clm@...com>
To: Tejun Heo <tj@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>
CC: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
<kernel-team@...com>
Subject: Re: [PATCH 2/2] sched/fair: Always propagate runnable_load_avg
On 04/25/2017 04:49 PM, Tejun Heo wrote:
> On Tue, Apr 25, 2017 at 11:49:41AM -0700, Tejun Heo wrote:
>> Will try that too. I can't see why HT would change it because I see
>> single CPU queues misevaluated. Just in case, you need to tune the
>> test params so that it doesn't load the machine too much and that
>> there are some non-CPU intensive workloads going on to perturb things
>> a bit. Anyways, I'm gonna try disabling HT.
>
> It's finickier but after changing the duty cycle a bit, it reproduces
> w/ HT off. I think the trick is setting the number of threads to the
> number of logical CPUs and tune -s/-c so that p99 starts climbing up.
> The following is from the root cgroup.
Since it's only measuring wakeup latency, schbench is best at exposing
problems when the machine is just barely below saturation. At
saturation, everyone has to wait for the CPUs, and when we're relatively
idle there's always a CPU to be found.
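This "cliff" near saturation is the classic queueing-theory shape, not
anything specific to the scheduler. As a rough illustration (a textbook
M/M/1 model, not a model of CFS itself), the 99th-percentile queueing
delay stays small until utilization gets close to 1, then climbs very
steeply:

```python
import math

def p99_wait_mm1(arrival_rate, service_rate):
    """99th-percentile queueing delay for an M/M/1 queue.

    For M/M/1, P(wait > t) = rho * exp(-(mu - lam) * t), so the
    p99 delay is t = ln(100 * rho) / (mu - lam) when rho > 0.01.
    """
    rho = arrival_rate / service_rate
    assert rho < 1.0, "queue is unstable at or above saturation"
    if rho <= 0.01:
        return 0.0  # more than 99% of arrivals find the server idle
    return math.log(100.0 * rho) / (service_rate - arrival_rate)

# service_rate = 1 task/ms; sweep the offered load toward saturation
for rho in (0.5, 0.9, 0.99):
    print(f"rho={rho:.2f}  p99 wait ~ {p99_wait_mm1(rho, 1.0):.1f} ms")
```

At rho=0.5 the p99 delay is a few milliseconds; at rho=0.99 it is
hundreds, which is why the interesting scheduler bugs only show up in
the narrow band just below saturation.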
There's schbench -a to try to find this magic tipping point, but I
haven't found a great way to automate it for every kind of machine yet
(sorry).
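For reference, the kind of manual tuning described above looks roughly
like this (a sketch only: the flag names match schbench of that era, but
the thread count and the -s/-c duty-cycle values are placeholders you
would sweep by hand until p99 starts climbing):

```shell
# Pin worker threads to the number of logical CPUs (16 here is a
# placeholder), then adjust sleep time (-s, usec) and cputime (-c, usec)
# to nudge the duty cycle until the machine sits just below saturation.
schbench -t 16 -s 30000 -c 30000 -r 30
```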
-chris