Message-ID: <7367bb73-3358-4925-ac9d-e2b90904d15a@linux.alibaba.com>
Date: Fri, 28 Feb 2025 11:39:21 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Zi Yan <ziy@...dia.com>, Liu Shixin <liushixin2@...wei.com>
Cc: linux-mm@...ck.org,
 Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
 Barry Song <baohua@...nel.org>, David Hildenbrand <david@...hat.com>,
 Hugh Dickins <hughd@...gle.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
 Lance Yang <ioworker0@...il.com>, Matthew Wilcox <willy@...radead.org>,
 Ryan Roberts <ryan.roberts@....com>,
 Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Softlockup when testing shmem swapout-swapin and compaction



On 2025/2/28 07:43, Zi Yan wrote:
> On 27 Feb 2025, at 2:04, Liu Shixin wrote:
> 
>> On 2025/2/26 15:22, Baolin Wang wrote:
>>> Add Zi.
>>>
>>> On 2025/2/26 15:03, Liu Shixin wrote:
>>>> Hi all,
>>>>
>>>> I found a softlockup when testing shmem large folio swapout-swapin and compaction:
>>>>
>>>>    watchdog: BUG: soft lockup - CPU#30 stuck for 179s! [folio_swap:4714]
>>>>    Modules linked in: zram xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat xt_addrtype iptable_filter ip_tantel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm rapl cixt4 mbcache jbd2 sr_mod cdrom ata_generic ata_piix virtio_net net_failover ghash_clmulni_intel libata sha512_ssse3
>>>>    CPU: 30 UID: 0 PID: 4714 Comm: folio_swap Kdump: loaded Tainted: G             L     6.14.0-rc4-next-20250225+ #2
>>>>    Tainted: [L]=SOFTLOCKUP
>>>>    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
>>>>    RIP: 0010:xas_load+0x5d/0xc0
>>>>    Code: 08 48 d3 ea 83 e2 3f 89 d0 48 83 c0 04 48 8b 44 c6 08 48 89 73 18 48 89 c1 83 e1 03 48 83 f9 02 75 08 48 3d
>>>>    RSP: 0000:ffffadf142f1ba60 EFLAGS: 00000293
>>>>    RAX: ffffe524cc4f6700 RBX: ffffadf142f1ba90 RCX: 0000000000000000
>>>>    RDX: 0000000000000011 RSI: ffff9a3e058acb68 RDI: ffffadf142f1ba90
>>>>    RBP: fffffffffffffffe R08: ffffadf142f1bb50 R09: 0000000000000392
>>>>    R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000011
>>>>    R13: ffffadf142f1bb48 R14: ffff9a3e04e9c588 R15: 0000000000000000
>>>>    FS:  00007fd957666740(0000) GS:ffff9a41ac0e5000(0000) knlGS:0000000000000000
>>>>    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>    CR2: 00007fd922860000 CR3: 000000025c360001 CR4: 0000000000772ef0
>>>>    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>>>    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>>>    PKRU: 55555554
>>>>    Call Trace:
>>>>     <IRQ>
>>>>     ? watchdog_timer_fn+0x1c9/0x250
>>>>     ? __pfx_watchdog_timer_fn+0x10/0x10
>>>>     ? __hrtimer_run_queues+0x10e/0x250
>>>>     ? hrtimer_interrupt+0xfb/0x240
>>>>     ? __sysvec_apic_timer_interrupt+0x4e/0xe0
>>>>     ? sysvec_apic_timer_interrupt+0x68/0x90
>>>>     </IRQ>
>>>>     <TASK>
>>>>     ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>     ? xas_load+0x5d/0xc0
>>>>     xas_find+0x153/0x1a0
>>>>     find_get_entries+0x73/0x280
>>>>     shmem_undo_range+0x1fc/0x640
>>>>     shmem_evict_inode+0x109/0x270
>>>>     evict+0x107/0x240
>>>>     ? fsnotify_destroy_marks+0x25/0x180
>>>>     ? _atomic_dec_and_lock+0x35/0x50
>>>>     __dentry_kill+0x71/0x190
>>>>     dput+0xd1/0x190
>>>>     __fput+0x128/0x2a0
>>>>     task_work_run+0x57/0x90
>>>>     syscall_exit_to_user_mode+0x1cb/0x1e0
>>>>     do_syscall_64+0x67/0x170
>>>>     entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>>>    RIP: 0033:0x7fd95776eb8b
>>>>
>>>> If CONFIG_DEBUG_VM is enabled, we also hit VM_BUG_ON_FOLIO(!folio_test_locked(folio)) in
>>>> shmem_add_to_page_cache().  The problem seems to be related to memory migration or
>>>> compaction, which is necessary for reproduction, although the exact cause is not yet clear.
>>>>
>>>> To reproduce the problem, first set up a zram device as the swap backend, and then run the
>>>> reproduction program. The reproduction program consists of three parts:
>>>>    1. A process that constantly toggles the shmem large folio status via:
>>>>           /sys/kernel/mm/transparent_hugepage/hugepages-<size>/shmem_enabled
>>>>    2. A process that constantly runs 'echo 1 > /proc/sys/vm/compact_memory' (see the sketch below)
>>>>    3. A process that constantly allocates/frees/swaps out/swaps in shmem large folios.
>>>>
>>>> I'm not sure whether the first process is necessary, but the second and third are. In addition,
>>>> when I hacked compaction_alloc() to return NULL, the problem disappeared, so I suspect the
>>>> problem is in migration.
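>>>>
>>>> For reference, the compaction-trigger process is just a tight loop like the minimal
>>>> sketch below (error handling trimmed; the actual test program differs):
>>>>
>>>>    #include <fcntl.h>
>>>>    #include <unistd.h>
>>>>
>>>>    int main(void)
>>>>    {
>>>>            /* Keep triggering compaction so that folios keep being
>>>>             * migrated while the shmem swapout/swapin test runs. */
>>>>            for (;;) {
>>>>                    int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);
>>>>
>>>>                    if (fd < 0)
>>>>                            return 1;
>>>>                    write(fd, "1", 1);
>>>>                    close(fd);
>>>>            }
>>>>    }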
>>>>
>>>> This problem is different from https://lore.kernel.org/all/1738717785.im3r5g2vxc.none@localhost/,
>>>> since I have confirmed that it still exists after merging that fix.
>>>
>>> Could you check if your version includes Zi's fix[1]? Not sure if it's related to the shmem large folio split.
>>>
>>> [1] https://lore.kernel.org/all/AF487A7A-F685-485D-8D74-756C843D6F0A@nvidia.com/
>>>
>> The tested kernel already includes this patch.
> 
> Hi Shixin,
> 
> Can you try the diff below? It fixed my local repro.
> 
> The issue is that after Baolin’s patch, shmem folios now use high-order
> entries, so the migration code should not update multiple xarray slots.

It is not after my patches. Since shmem was converted to use folios, the 
shmem mapping has stored high-order entries; previously, during swap, the 
shmem large folio would be split (whereas my patches allow shmem large 
folios to be swapped out without splitting).
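
To illustrate the difference (just a sketch of the xarray semantics, not 
the actual shmem code; locking and xas_nomem() retry handling omitted): a 
high-order entry stores the folio once for the whole 2^order range, so 
only one slot needs updating, while after a split each index holds its 
own entry:

    #include <linux/xarray.h>

    /* Sketch: one high-order entry logically covers indices
     * [index, index + (1 << order)), so a single store suffices. */
    static void store_high_order(struct xarray *xa, unsigned long index,
                                 unsigned int order, void *entry)
    {
            XA_STATE_ORDER(xas, xa, index, order);

            xas_lock(&xas);
            xas_store(&xas, entry);
            xas_unlock(&xas);
    }

    /* Sketch: after a split, each index holds its own entry and
     * must be updated one slot at a time. */
    static void store_per_index(struct xarray *xa, unsigned long index,
                                unsigned long nr, void *entry)
    {
            unsigned long i;

            for (i = 0; i < nr; i++)
                    xa_store(xa, index + i, entry, GFP_KERNEL);
    }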

> Hi Baolin,
> 
> Does your patch affect anonymous swapout? If yes, we can remove

No.

> the for loop that updates the xarray in __folio_migrate_mapping().

I think the issue was introduced by commit fc346d0a70a1 ("mm: migrate 
high-order folios in swap cache correctly"), which did not handle shmem 
folios correctly.
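
From memory, after that commit __folio_migrate_mapping() does roughly the 
following for the swap cache case (paraphrased; details may differ from 
the exact upstream code):

    if (folio_test_swapcache(folio)) {
            folio_set_swapcache(newfolio);
            newfolio->private = folio_get_private(folio);
            /* assumes one xarray slot per subpage -- wrong for shmem,
             * whose mapping stores a single high-order entry */
            entries = nr;
    } else {
            entries = 1;
    }

    /* ... */

    for (i = 0; i < entries; i++) {
            xas_store(&xas, newfolio);
            xas_next(&xas);
    }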

> diff --git a/mm/migrate.c b/mm/migrate.c
> index 365c6daa8d1b..be77932596b3 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -44,6 +44,7 @@
>   #include <linux/sched/sysctl.h>
>   #include <linux/memory-tiers.h>
>   #include <linux/pagewalk.h>
> +#include <linux/shmem_fs.h>
> 
>   #include <asm/tlbflush.h>
> 
> @@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>   			folio_set_swapcache(newfolio);
>   			newfolio->private = folio_get_private(folio);
>   		}
> -		entries = nr;
> +		/* shmem now uses high-order entry */
> +		if (folio->mapping && shmem_mapping(folio->mapping))

Nit: we've already checked 'mapping' above, so this can be simplified to 
'shmem_mapping(mapping)'.
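
I.e., something like:

    /* shmem now uses a high-order entry */
    if (shmem_mapping(mapping))
            entries = 1;
    else
            entries = nr;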

> +			entries = 1;
> +		else
> +			entries = nr;
>   	} else {
>   		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>   		entries = 1;

Good catch. The fix looks good to me. Thanks.
