Message-ID: <558456DB.3040108@redhat.com>
Date: Fri, 19 Jun 2015 13:52:27 -0400
From: Rik van Riel <riel@...hat.com>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
CC: linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...nel.org, mgorman@...e.de
Subject: Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

On 06/19/2015 01:16 PM, Srikar Dronamraju wrote:
>>
>> OK, so we are looking at two multi-threaded processes
>> on a 4 node system, and waiting for them to converge?
>>
>> It may make sense to add my patch in with your patch
>> 1/4 from last week, as well as the correct part of
>> your patch 4/4, and see how they all work together.
>>
>
> Tested SPECjbb and autonumabenchmark on 4 kernels.
>
> Plain 4.1.0-rc7-tip (i)
> tip + only Rik's patch (ii)
> tip + Rik's ++ (iii)
> tip + Srikar's ++ (iv)
> 5 iterations of SPECjbb on a 4 node, 24 core PowerPC machine.
> Ran 1 instance per system.

Would you happen to have 2-instance and 4-instance SPECjbb
numbers, too? The single-instance numbers seem to be within
the margin of error, but I would expect multi-instance numbers
to show more dramatic changes, due to changes in how workloads
converge...

Those behave very differently from the single-instance case,
especially with the "always set the preferred_nid, even if we
moved the task to a node we do NOT prefer" patch...
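
To make that distinction concrete, here is a minimal userspace
sketch, not the actual kernel patch: the struct, the two helper
names, and the scenario are made up for illustration, and only the
"always record the destination as preferred" policy is taken from
the discussion above (the real update happens around sched_setnuma()
in kernel/sched/fair.c).

	/* Standalone model of two numa_preferred_nid update policies.
	 * Everything here is hypothetical except the policy distinction. */
	#include <stdio.h>

	struct task {
		int numa_preferred_nid;	/* node the task wants to run on */
		int cur_nid;		/* node the task actually runs on */
	};

	/* Policy A: only update preferred_nid when we actually land
	 * on the node we wanted. */
	static void move_task_conservative(struct task *p, int dst_nid)
	{
		p->cur_nid = dst_nid;
		if (dst_nid == p->numa_preferred_nid)
			p->numa_preferred_nid = dst_nid;
	}

	/* Policy B: always record the destination as the new preferred
	 * node, even when we were pushed to a node we did NOT prefer. */
	static void move_task_always(struct task *p, int dst_nid)
	{
		p->cur_nid = dst_nid;
		p->numa_preferred_nid = dst_nid;
	}

	int main(void)
	{
		struct task a = { .numa_preferred_nid = 0, .cur_nid = 0 };
		struct task b = { .numa_preferred_nid = 0, .cur_nid = 0 };

		/* Suppose load balancing evicts both tasks to node 2,
		 * which neither of them prefers. */
		move_task_conservative(&a, 2);
		move_task_always(&b, 2);

		printf("conservative: runs on %d, still prefers %d\n",
		       a.cur_nid, a.numa_preferred_nid);
		printf("always-set:   runs on %d, now prefers %d\n",
		       b.cur_nid, b.numa_preferred_nid);

		/* Under policy A the task keeps trying to migrate back
		 * to node 0; under policy B it settles on node 2 and
		 * stops fighting the load balancer. That is the kind of
		 * difference that should show up in how multi-instance
		 * workloads converge. */
		return 0;
	}
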
It would be good to understand the behaviour of these patches
under more circumstances.
--
All rights reversed