Message-ID: <9e59f1f0-db3b-2182-4485-887ac7036bfd@huaweicloud.com>
Date: Tue, 10 Jun 2025 09:02:01 +0800
From: Kemeng Shi <shikemeng@...weicloud.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, hughd@...gle.com,
 willy@...radead.org, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 1/7] mm: shmem: correctly pass alloced parameter to
 shmem_recalc_inode() to avoid WARN_ON()



on 6/9/2025 8:46 AM, Kemeng Shi wrote:
> 
> 
> on 6/7/2025 2:11 PM, Baolin Wang wrote:
>>
>>
>> On 2025/6/6 06:10, Kemeng Shi wrote:
>>> As noted in the comments, we need to release block usage for a swap
>>> entry which was replaced with a poisoned swap entry. However, no block
>>> usage is actually freed by calling shmem_recalc_inode(inode, -nr_pages,
>>> -nr_pages). Instead, calling shmem_recalc_inode(inode, 0, -nr_pages)
>>> correctly releases the block usage.
>>>
>>> Fixes: 6cec2b95dadf7 ("mm/shmem: fix infinite loop when swap in shmem error at swapoff time")
>>> Signed-off-by: Kemeng Shi <shikemeng@...weicloud.com>
>>> ---
>>>   mm/shmem.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index 4b42419ce6b2..e27d19867e03 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -2145,7 +2145,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
>>>        * won't be 0 when inode is released and thus trigger WARN_ON(i_blocks)
>>>        * in shmem_evict_inode().
>>>        */
>>> -    shmem_recalc_inode(inode, -nr_pages, -nr_pages);
>>> +    shmem_recalc_inode(inode, 0, -nr_pages);
>>>       swap_free_nr(swap, nr_pages);
>>>   }
>>
>> Have you tested your patch? When I inject an error to test your patch, the following issue will be triggered:
> As all issues are hard to trigger, I only run some simple test to ensure normal
> process is fine. Could you share how to inject the error to trigger following
> issue. I will have a deep look. Thanks
Sorry, my message above was truncated. I meant that I only ran some simple tests
to make sure the normal path is fine.
Besides, I think there is another long-standing issue which could trigger the
warning below. Here is the problem that may be to blame:
When a swap entry is replaced with an error entry in shmem_set_folio_swapin_error(),
info->swapped is reduced. Afterwards, the error entry can be deleted in
shmem_undo_range(), and info->swapped is reduced again. As a result, info->swapped
is decremented twice for a single swap entry.
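To make the double accounting easier to see, here is a minimal user-space sketch
of the idea (toy_inode and toy_uncharge_swap() are made-up names for illustration,
not the real mm/shmem.c code): dropping the charge for the same swap entry twice
leaves the block count inconsistent, which is the kind of imbalance the
WARN_ON(i_blocks) in shmem_evict_inode() complains about.

/* Hypothetical, simplified stand-in for the shmem per-inode accounting;
 * field and function names are invented for this sketch.
 */
#include <assert.h>
#include <stdio.h>

struct toy_inode {
	long swapped;		/* swap entries still charged to the inode */
	unsigned long blocks;	/* i_blocks-style block charge */
};

/* Rough model of dropping the charge for nr swap entries. */
static void toy_uncharge_swap(struct toy_inode *ino, unsigned long nr)
{
	ino->swapped -= nr;
	ino->blocks -= nr;	/* underflows if the charge was already dropped */
}

int main(void)
{
	/* one page sitting in swap, one block charged for it */
	struct toy_inode ino = { .swapped = 1, .blocks = 1 };

	/* swapin-error path: the swap entry is replaced with an error entry
	 * and its charge is dropped once here ...
	 */
	toy_uncharge_swap(&ino, 1);

	/* ... and dropped again when the error entry is later removed from
	 * the mapping (the shmem_undo_range() step described above).
	 */
	toy_uncharge_swap(&ino, 1);

	printf("swapped=%ld blocks=%lu\n", ino.swapped, ino.blocks);

	/* Stand-in for the eviction-time sanity check: after the double
	 * uncharge the block count has underflowed and is no longer zero.
	 */
	assert(ino.blocks == 0);	/* fires, like WARN_ON(i_blocks) */
	return 0;
}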
A simple way to confirm this is to inject the error into the original code. Could
you share how you inject the error to trigger the issue, or could you run the same
test against the original code?
Thanks.

>>
>> [  127.173330] ------------[ cut here ]------------
>> [  127.173331] WARNING: CPU: 13 PID: 6860 at mm/shmem.c:1388 shmem_evict_inode+0xf0/0x348
>> [  127.173920] CPU: 13 UID: 0 PID: 6860 Comm: shmem_swapin_er Kdump: loaded Tainted: G            E       6.15.0-rc6+ #54 VOLUNTARY
>> [  127.173925] pstate: 63401005 (nZCv daif +PAN -UAO +TCO +DIT +SSBS BTYPE=--)
>> [  127.173927] pc : shmem_evict_inode+0xf0/0x348
>> [  127.173929] lr : shmem_evict_inode+0x68/0x348
>> [  127.173931] sp : ffff8000895639e0
>> [  127.173932] x29: ffff8000895639e0 x28: 0000000000000006 x27: ffff00013754bfc0
>> [  127.173935] x26: ffff800080d8f160 x25: 0000000000000006 x24: ffff0000c0aab440
>> [  127.173937] x23: ffff00013754b780 x22: ffff00013754b780 x21: ffff0000cbc9c6b0
>> [  127.173940] x20: ffff0000c0aab440 x19: ffff0000cbc9c700 x18: 0000000000000030
>> [  127.173942] x17: 0000ffffa1f4cfff x16: 0000000000000003 x15: 0000000000001000
>> [  127.173945] x14: 00000000ffffffff x13: 0000000000000004 x12: ffff800089563108
>> [  127.173947] x11: 0000000000000000 x10: 0000000000000002 x9 : ffff800080352080
>> [  127.173949] x8 : fffffffffffffffe x7 : ffff800089563700 x6 : 0000000000000001
>> [  127.173952] x5 : 0000000000000004 x4 : 0000000000000002 x3 : 0000000000000002
>> [  127.173954] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffffffffffffff80
>> [  127.173957] Call trace:
>> [  127.173958]  shmem_evict_inode+0xf0/0x348 (P)
>> [  127.173961]  evict+0x1c8/0x2c8
>> [  127.173964]  iput_final+0x84/0x1a0
>> [  127.173966]  iput.part.0+0xd0/0xf0
>> [  127.173968]  iput+0x20/0x38
>> [  127.173971]  dentry_unlink_inode+0xc0/0x158
>> [  127.173973]  __dentry_kill+0x80/0x248
>> [  127.173974]  dput+0xf0/0x240
>> [  127.173976]  __fput+0x120/0x2f0
>> [  127.173978]  ____fput+0x18/0x28
>> [  127.173980]  task_work_run+0x88/0x120
>> [  127.173983]  do_exit+0x198/0x3c0
>> [  127.173986]  do_group_exit+0x38/0xa0
>> [  127.173987]  get_signal+0x6ac/0x6b8
>> [  127.173990]  do_signal+0x100/0x208
>> [  127.173991]  do_notify_resume+0xc8/0x158
>> [  127.173994]  el0_da+0xbc/0xc0
>> [  127.173997]  el0t_64_sync_handler+0x70/0xc8
>> [  127.173999]  el0t_64_sync+0x154/0x158
>> [  127.174001] ---[ end trace 0000000000000000 ]---
>>

