Message-ID: <1a384b0d-0548-4340-a664-dffe8fe4cbdc@linux.ibm.com>
Date: Thu, 9 Jan 2025 19:59:10 +0530
From: Donet Tom <donettom@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc: Ritesh Harjani <ritesh.list@...il.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
        Matthew Wilcox <willy@...radead.org>, Zi Yan <ziy@...dia.com>,
        Muchun Song <muchun.song@...ux.dev>
Subject: Re: [PATCH] mm/memory.c: Add return NUMA_NO_NODE in
 numa_migrate_check() when folio_nid() and numa_node_id() are the same.


On 1/9/25 18:43, David Hildenbrand wrote:
> On 09.01.25 07:46, Donet Tom wrote:
>> If the folio_nid() and numa_node_id() are the same, it indicates
>> that the folio is already on the same node as the process. In
>> this case, there's no need to migrate the pages.
>>
>> This patch adds return NUMA_NO_NODE in numa_migrate_check() when
>> the folio_nid() and numa_node_id() match, preventing the function
>> from executing the remaining code unnecessarily.
>>
>> Signed-off-by: Donet Tom <donettom@...ux.ibm.com>
>> ---
>>   mm/memory.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 398c031be9ba..dfd89ff7f639 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5509,6 +5509,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
>>       if (folio_nid(folio) == numa_node_id()) {
>>           count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
>>           *flags |= TNF_FAULT_LOCAL;
>> +        return NUMA_NO_NODE;
>
> Doesn't this just mean that it is a local fault, but not necessarily 
> that we don't want to migrate that folio?
>
> mpol_misplaced states: "check whether current folio node is valid in 
> policy"
>
> Could we have a different policy set that does not indicate the local 
> node as the target node?
>
> Note how mpol_misplaced() obtains the target node:
>
>
> int curnid = folio_nid(folio);
> ...
> int polnid = NUMA_NO_NODE;
> int ret = NUMA_NO_NODE;
>
> ... detect polnid
>
> if (curnid != polnid)
>     ret = polnid;
> ...
> return ret;
>
>
> So mpol_misplaced() will return "NUMA_NO_NODE" if already on the 
> correct target node.

Thank you, David. I understand now that my patch is wrong.

I have a small question: page access latency is lower when the folio is
on the same NUMA node as the process. However, if the policy node is set
to a different NUMA node and the MPOL_F_MORON flag is not set, we migrate
the page to the policy node, thereby increasing access latency. Could
this have an impact on performance? What benefit do we gain from this?

