Message-ID: <f9ca3b97-002d-46b0-904b-c9b1859ee236@linux.alibaba.com>
Date: Mon, 19 Feb 2024 17:54:36 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: Byungchul Park <byungchul@...com>, akpm@...ux-foundation.org,
 ying.huang@...el.com, hannes@...xchg.org, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org, kernel_team@...ynix.com, stable@...r.kernel.org
Subject: Re: [PATCH] mm/vmscan: Fix a bug calling wakeup_kswapd() with a wrong
 zone index



On 2024/2/19 16:11, Oscar Salvador wrote:
> On Mon, Feb 19, 2024 at 02:25:11PM +0800, Baolin Wang wrote:
>> Does this mean there is no memory on the target node? If so, we can add
>> a check at the beginning to avoid an unnecessary call to
>> migrate_misplaced_folio().
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index e95503d7544e..a64a1aac463f 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5182,7 +5182,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>          else
>>                  last_cpupid = folio_last_cpupid(folio);
>>          target_nid = numa_migrate_prep(folio, vma, vmf->address, nid, &flags);
>> -       if (target_nid == NUMA_NO_NODE) {
>> +       if (target_nid == NUMA_NO_NODE || !node_state(target_nid, N_MEMORY)) {
>>                  folio_put(folio);
>>                  goto out_map;
>>          }
>>
>> (similar changes for do_huge_pmd_numa_page())
> 
> With the check in place from [1], numa_migrate_prep() will also return
> NUMA_NO_NODE, so no need for this one here.
> 
> And I did not check, but I assume that do_huge_pmd_numa_page() also ends
> up calling numa_migrate_prep().
> 
> [1] https://lore.kernel.org/lkml/20240219041920.1183-1-byungchul@sk.com/
Right. I had missed this patch. So with the check in 
should_numa_migrate_memory(), I guess the current changes in 
numamigrate_isolate_folio() can also be dropped, since it will never hit 
a memoryless node after patch [1], no?
