Message-Id: <1738717785.im3r5g2vxc.none@localhost>
Date: Tue, 04 Feb 2025 20:23:47 -0500
From: "Alex Xu (Hello71)" <alex_y_xu@...oo.ca>
To: linux-mm@...ck.org, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Daniel Gomez <da.gomez@...sung.com>
Cc: Barry Song <baohua@...nel.org>, David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
Lance Yang <ioworker0@...il.com>, Matthew Wilcox <willy@...radead.org>,
Ryan Roberts <ryan.roberts@....com>, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Hang when swapping huge=within_size tmpfs from zram

Hi all,

On 6.14-rc1, I found that creating many files in a tmpfs and then
deleting them reliably hangs when the tmpfs is mounted with
huge=within_size and its pages have been swapped out to zram
(zstd/zsmalloc, no backing device). I bisected the problem to commit
acd7ccb284b ("mm: shmem: add large folio support for tmpfs").
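
For reference, a reproducer along these lines should trigger it. This
is a sketch from memory, not the exact script I ran: the file count,
file size, and zram disksize are illustrative and will need adjusting
so that memory pressure actually pushes the tmpfs pages into swap.
Requires root.

# Hypothetical reproducer sketch -- parameters are illustrative.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon /dev/zram0

mkdir -p /mnt/test
mount -t tmpfs -o huge=within_size tmpfs /mnt/test

# Create many files...
for i in $(seq 1 10000); do
    head -c 1M /dev/urandom > /mnt/test/f$i
done

# ...apply enough memory pressure that the tmpfs pages are swapped
# out to zram, then delete everything.
rm -rf /mnt/test/*    # rm hangs here, unkillable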
When the issue occurs, rm spins at 100% CPU, cannot be killed, and
shows nothing in /proc/<pid>/stack or wchan. Eventually an RCU stall is
detected:

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-11): P25160
rcu: (detected by 10, t=2102 jiffies, g=532677, q=4997 ncpus=12)
task:rm state:R running task stack:0 pid:25160 tgid:25160 ppid:24309 task_flags:0x400000 flags:0x00004004
Call Trace:
<TASK>
? __schedule+0x388/0x1000
? kmem_cache_free.part.0+0x23d/0x280
? sysvec_apic_timer_interrupt+0xa/0x80
? asm_sysvec_apic_timer_interrupt+0x16/0x20
? xas_load+0x12/0xc0
? xas_load+0x8/0xc0
? xas_find+0x144/0x190
? find_lock_entries+0x75/0x260
? shmem_undo_range+0xe6/0x5f0
? shmem_evict_inode+0xe4/0x230
? mtree_erase+0x7e/0xe0
? inode_set_ctime_current+0x2e/0x1f0
? evict+0xe9/0x260
? _atomic_dec_and_lock+0x31/0x50
? do_unlinkat+0x270/0x2b0
? __x64_sys_unlinkat+0x30/0x50
? do_syscall_64+0x37/0xe0
? entry_SYSCALL_64_after_hwframe+0x50/0x58
</TASK>

Let me know what information is needed to further troubleshoot this
issue.

Thanks,
Alex.