Message-ID: <9f95a85f-5396-b8bd-50cf-c4eeeac2a013@amd.com>
Date: Tue, 1 Feb 2022 17:52:55 +0530
From: Bharata B Rao <bharata@....com>
To: Mel Gorman <mgorman@...e.de>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, bristot@...hat.com,
dishaa.talreja@....com, Wei Huang <wei.huang2@....com>
Subject: Re: [RFC PATCH v0 1/3] sched/numa: Process based autonuma scan period
framework
Thanks, Mel, for taking the time to look at the patchset and for your valuable review comments.
On 1/31/2022 5:47 PM, Mel Gorman wrote:
> On Fri, Jan 28, 2022 at 10:58:49AM +0530, Bharata B Rao wrote:
>> From: Disha Talreja <dishaa.talreja@....com>
>>
>> Add a new framework that calculates autonuma scan period
>> based on per-process NUMA fault stats.
>>
>> NUMA faults can be classified into different categories, such
>> as local vs. remote, or private vs. shared. It is also important
>> to understand such behavior from the perspective of a process.
>> The per-process fault stats added here will be used for
>> calculating the scan period in the adaptive NUMA algorithm.
>>
>
> Be more specific no how the local vs remote, private vs shared states
> are reflections of per-task activity of the same.
Sure, we will document the algorithm better. The overall thinking here
is that address-space scanning is a per-process activity, and hence a
scan period derived from the accumulated per-process faults is more
appropriate than calculating per-task (per-thread) scan periods.
Participating threads may each have their own local/remote and
private/shared fault behavior, but aggregating them at the process level
gives a better input for the eventual scan period variation. The
expectation is that individual thread fault rates will alter the overall
process metrics in such a way that we respond by changing the scan rate
to scan more or less aggressively.
>
>> The actual scan period is still using the original value
>> p->numa_scan_period before the real implementation is added in
>> place in a later commit.
>>
>> Co-developed-by: Wei Huang <wei.huang2@....com>
>> Signed-off-by: Wei Huang <wei.huang2@....com>
>> Signed-off-by: Disha Talreja <dishaa.talreja@....com>
>> Signed-off-by: Bharata B Rao <bharata@....com>
>> ---
>> include/linux/mm_types.h | 7 +++++++
>> kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++++++++++++--
>> 2 files changed, 45 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index 9db36dc5d4cf..4f978c09d3db 100644
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>> @@ -610,6 +610,13 @@ struct mm_struct {
>>
>> /* numa_scan_seq prevents two threads setting pte_numa */
>> int numa_scan_seq;
>> +
>> + /* Process-based Adaptive NUMA */
>> + atomic_long_t faults_locality[2];
>> + atomic_long_t faults_shared[2];
>> +
>> + spinlock_t pan_numa_lock;
>
> Document what this lock protects. In the context of this patch it appears
> to protect a read of p->numa_scan_period and it's overkill to use a
> spinlock for that. Also, given that it's a trylock, the task_numa_work
> ends up doing no scanning or updates. This might have some value in
> terms of avoiding multiple threads doing updates if they happen to start
> at the same time but that's a narrow side-effect given the short hold
> time of the lock.
Sure, I put a comment in the code, but will document the usage here as
well. If the trylock fails, it means some other thread is updating the
stats, and most likely that thread will go ahead with the atomic update
to mm->numa_next_scan and start the scanning. So I can't see how this
will stall scanning or stat updates in general. Please note that in the
existing scheme the stats aggregation happens at fault time, but in PAN
it happens in task-work context.
>
>> + unsigned int numa_scan_period;
>
> Document how the per-mm numa_scan_period is related to the per-task
> numa_scan_period.
They aren't related; the per-mm numa_scan_period is in fact a
replacement for the per-task numa_scan_period. However, the
numa_migrate_retry interval still depends on the per-task period, as you
noted elsewhere. I think we could replace that usage too with the per-mm
numa_scan_period and remove the per-task version completely.
Regards,
Bharata.