Date: Tue, 14 May 2024 11:07:18 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: David Hildenbrand <david@...hat.com>, <akpm@...ux-foundation.org>
CC: <shy828301@...il.com>, <nao.horiguchi@...il.com>,
	<xuyu@...ux.alibaba.com>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/huge_memory: mark huge_zero_folio reserved

On 2024/5/13 23:40, David Hildenbrand wrote:
> On 11.05.24 05:28, Miaohe Lin wrote:
>> When I did memory failure tests recently, the panic below occurred:
>>
>>   kernel BUG at include/linux/mm.h:1135!
>>   invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
>>   CPU: 9 PID: 137 Comm: kswapd1 Not tainted 6.9.0-rc4-00491-gd5ce28f156fe-dirty #14
>>   RIP: 0010:shrink_huge_zero_page_scan+0x168/0x1a0
>>   RSP: 0018:ffff9933c6c57bd0 EFLAGS: 00000246
>>   RAX: 000000000000003e RBX: 0000000000000000 RCX: ffff88f61fc5c9c8
>>   RDX: 0000000000000000 RSI: 0000000000000027 RDI: ffff88f61fc5c9c0
>>   RBP: ffffcd7c446b0000 R08: ffffffff9a9405f0 R09: 0000000000005492
>>   R10: 00000000000030ea R11: ffffffff9a9405f0 R12: 0000000000000000
>>   R13: 0000000000000000 R14: 0000000000000000 R15: ffff88e703c4ac00
>>   FS:  0000000000000000(0000) GS:ffff88f61fc40000(0000) knlGS:0000000000000000
>>   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>   CR2: 000055f4da6e9878 CR3: 0000000c71048000 CR4: 00000000000006f0
>>   Call Trace:
>>    <TASK>
>>    do_shrink_slab+0x14f/0x6a0
>>    shrink_slab+0xca/0x8c0
>>    shrink_node+0x2d0/0x7d0
>>    balance_pgdat+0x33a/0x720
>>    kswapd+0x1f3/0x410
>>    kthread+0xd5/0x100
>>    ret_from_fork+0x2f/0x50
>>    ret_from_fork_asm+0x1a/0x30
>>    </TASK>
>>   Modules linked in: mce_inject hwpoison_inject
>>   ---[ end trace 0000000000000000 ]---
>>   RIP: 0010:shrink_huge_zero_page_scan+0x168/0x1a0
>>   RSP: 0018:ffff9933c6c57bd0 EFLAGS: 00000246
>>   RAX: 000000000000003e RBX: 0000000000000000 RCX: ffff88f61fc5c9c8
>>   RDX: 0000000000000000 RSI: 0000000000000027 RDI: ffff88f61fc5c9c0
>>   RBP: ffffcd7c446b0000 R08: ffffffff9a9405f0 R09: 0000000000005492
>>   R10: 00000000000030ea R11: ffffffff9a9405f0 R12: 0000000000000000
>>   R13: 0000000000000000 R14: 0000000000000000 R15: ffff88e703c4ac00
>>   FS:  0000000000000000(0000) GS:ffff88f61fc40000(0000) knlGS:0000000000000000
>>   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>   CR2: 000055f4da6e9878 CR3: 0000000c71048000 CR4: 00000000000006f0
>>
>> The root cause is that the HWPoison flag is set on huge_zero_folio
>> without increasing the folio refcnt. But unpoison_memory() will then
>> decrease the folio refcnt unexpectedly, since the folio looks like a
>> successfully hwpoisoned one, leading to
>> VM_BUG_ON_PAGE(page_ref_count(page) == 0) when huge_zero_folio is
>> released.
>>
>> Fix this issue by marking huge_zero_folio reserved so that
>> unpoison_memory() will skip it. This also makes it consistent with
>> the ZERO_PAGE case.
>>
>> Fixes: 478d134e9506 ("mm/huge_memory: do not overkill when splitting huge_zero_page")
>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
>> Cc: <stable@...r.kernel.org>
>> ---
>>   mm/huge_memory.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 317de2afd371..d508ff793145 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -212,6 +212,7 @@ static bool get_huge_zero_page(void)
>>           folio_put(zero_folio);
>>           goto retry;
>>       }
>> +    __folio_set_reserved(zero_folio);
> 
> We want to limit/remove the use of PG_reserved. Please find a different way (e.g., simply checking for the huge zero page directly).
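
(For context, a rough sketch of the direction suggested above: rather than setting PG_reserved, the unpoison path could test for the shared huge zero folio directly. The helper name is_huge_zero_folio() follows the huge_zero_folio naming used in mm/huge_memory.c but is an assumption here, as is the idea of calling it from unpoison_memory(); this is not the fix that was eventually applied.)

/*
 * Illustrative sketch only -- is_huge_zero_folio() and the call site are
 * assumptions, not taken from this thread.
 */
static bool unpoison_skip_folio(struct folio *folio)
{
	/*
	 * The huge zero folio is poisoned without the extra reference that
	 * memory_failure() takes for ordinary folios, so the unpoison path
	 * must not drop a reference for it; leave it alone, as is already
	 * done for the reserved ZERO_PAGE.
	 */
	return is_huge_zero_folio(folio);
}

In such a scheme, unpoison_memory() would call this before touching the refcount and return early (for example with an error to the caller) when it fires.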

I see. I will drop this patch and find another way.
Thanks.
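
(A schematic of the refcount imbalance the report describes, reduced to its effect on the folio reference count; the function names in the comments mirror the trace above, but the bodies are collapsed and this is not a literal call sequence.)

/* Schematic only -- see the note above. */
static void huge_zero_refcount_imbalance(struct folio *zero_folio)
{
	/*
	 * memory_failure(): HWPoison is set on the huge zero folio, but no
	 * extra reference is taken for it.
	 */

	/*
	 * unpoison_memory(): the folio looks like an ordinarily poisoned
	 * folio, so a reference that was never taken gets dropped here.
	 */
	folio_put(zero_folio);

	/*
	 * shrink_huge_zero_page_scan(): drops what it believes is the last
	 * reference; with the count already short by one, this trips
	 * VM_BUG_ON_PAGE(page_ref_count(page) == 0) when the folio is
	 * released.
	 */
	folio_put(zero_folio);
}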


