Message-ID: <acecfe24-2e31-f560-91bc-93e7e08c109c@fb.com>
Date: Fri, 9 Jun 2017 13:52:06 -0400
From: Chris Mason <clm@...com>
To: Peter Zijlstra <peterz@...radead.org>,
Matt Fleming <matt@...eblueprint.co.uk>
CC: <mingo@...nel.org>, <tglx@...utronix.de>, <riel@...hat.com>,
<hpa@...or.com>, <efault@....de>, <linux-kernel@...r.kernel.org>,
<torvalds@...ux-foundation.org>, <lvenanci@...hat.com>,
<xiaolong.ye@...el.com>, <kitsunyan@...ox.ru>
Subject: Re: hackbench vs select_idle_sibling; was: [tip:sched/core]
sched/fair, cpumask: Export for_each_cpu_wrap()
On 06/06/2017 05:21 AM, Peter Zijlstra wrote:
> On Mon, Jun 05, 2017 at 02:00:21PM +0100, Matt Fleming wrote:
>> On Fri, 19 May, at 04:00:35PM, Matt Fleming wrote:
>>> On Wed, 17 May, at 12:53:50PM, Peter Zijlstra wrote:
>>>>
>>>> Please test..
>>>
>>> Results are still coming in but things do look better with your patch
>>> applied.
>>>
>>> It does look like there's a regression when running hackbench in
>>> process mode and when the CPUs are not fully utilised, e.g. check this
>>> out:
>>
>> This turned out to be a false positive; your patch improves things as
>> far as I can see.
>
> Hooray, I'll move it to a part of the queue intended for merging.
It's a little late, but Roman Gushchin helped get some runs of this with
our production workload. The patch is ever so slightly better.
Thanks!
-chris