Message-ID: <559D4128.2080606@redhat.com>
Date: Wed, 08 Jul 2015 11:26:32 -0400
From: Rik van Riel <riel@...hat.com>
To: Ingo Molnar <mingo@...nel.org>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
CC: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier
avatar.
On 07/08/2015 09:56 AM, Ingo Molnar wrote:
>
> * Srikar Dronamraju <srikar@...ux.vnet.ibm.com> wrote:
>
>> In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
>> sched feature NUMA was always set to true. However this sched feature was
>> suppose to be enabled on NUMA boxes only thro set_numabalancing_state().
>>
>> To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.
>
> Three typos and a non-standard commit ID reference.
>
>> /*
>> + * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
>> + * higher number of hinting faults are recorded during active load
>> + * balancing. It will resist moving tasks towards nodes where a lower
>> + * number of hinting faults have been recorded.
>> */
>> -SCHED_FEAT(NUMA, true)
>> +SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
>> #endif
>>
>
> So the comment spells 'favor' American, the constant you introduce is British
> spelling via 'FAVOUR'? Please use it consistently!
>
> Also, this name is totally non-intuitive.
>
> Make it something like NUMA_FAVOR_BUSY_NODES or so?
It is not about relocating tasks to busier nodes. The scheduler still
moves tasks from busier nodes to idler nodes.

This code makes the scheduler more willing to move tasks away from nodes
where they have recorded fewer NUMA hinting faults, and towards nodes
where they have recorded more.

Not sure what a good name would be to describe that...
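
To make that concrete, here is a rough standalone sketch of the comparison
being described -- this is not the actual fair.c code, and the struct, flag,
and function names below are invented purely for illustration: gate on a
feature flag, then prefer the destination node where the task has recorded
more NUMA hinting faults than on its current node.

/*
 * Illustrative sketch only -- not the kernel implementation.  When the
 * (hypothetical) feature flag is set, a task is a better migration
 * candidate towards a node where it has recorded more hinting faults,
 * i.e. a node where more of its memory appears to live.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8

struct task_numa_stats {
	/* NUMA hinting faults recorded per node for this task */
	unsigned long faults[MAX_NODES];
};

/* hypothetical stand-in for the sched feature being discussed */
static bool numa_favour_higher = true;

/*
 * Return true if moving the task from src_nid to dst_nid would place it
 * on a node where it has recorded more hinting faults than on its
 * current node.
 */
static bool move_improves_numa_locality(const struct task_numa_stats *ts,
					int src_nid, int dst_nid)
{
	if (!numa_favour_higher)
		return false;

	return ts->faults[dst_nid] > ts->faults[src_nid];
}

int main(void)
{
	struct task_numa_stats ts = { .faults = { [0] = 10, [1] = 40 } };

	/* moving from node 0 (10 faults) to node 1 (40 faults) helps */
	printf("0 -> 1 improves locality: %d\n",
	       move_improves_numa_locality(&ts, 0, 1));

	/* the reverse move does not */
	printf("1 -> 0 improves locality: %d\n",
	       move_improves_numa_locality(&ts, 1, 0));

	return 0;
}

The real scheduler of course weighs this against load balance; the sketch
only captures the "prefer the node with more recorded faults" preference
that the feature name is meant to convey.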
--
All rights reversed