Message-ID: <41170067-dcb9-4aa1-a5fe-0cbee6af02df-agordeev@linux.ibm.com>
Date: Wed, 29 Oct 2025 07:36:00 +0100
From: Alexander Gordeev <agordeev@...ux.ibm.com>
To: Luiz Capitulino <luizcap@...hat.com>
Cc: hca@...ux.ibm.com, borntraeger@...ux.ibm.com, joao.m.martins@...cle.com,
mike.kravetz@...cle.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-s390@...r.kernel.org, gor@...ux.ibm.com,
gerald.schaefer@...ux.ibm.com, osalvador@...e.de,
akpm@...ux-foundation.org, david@...hat.com, aneesh.kumar@...nel.org
Subject: Re: [PATCH v2] s390: fix HugeTLB vmemmap optimization crash
On Tue, Oct 28, 2025 at 05:15:33PM -0400, Luiz Capitulino wrote:
> A reproducible crash occurs when enabling HugeTLB vmemmap optimization (HVO)
> on s390. The crash and the proposed fix were worked out on an s390 KVM guest
> running on an older hypervisor, as I don't have access to an LPAR. However,
> the same issue should occur on bare metal.
>
> Reproducer (it may take a few runs to trigger):
>
> # sysctl vm.hugetlb_optimize_vmemmap=1
> # echo 1 > /proc/sys/vm/nr_hugepages
> # echo 0 > /proc/sys/vm/nr_hugepages
>
> Crash log:
>
> [ 52.340369] list_del corruption. prev->next should be 000000d382110008, but was 000000d7116d3880. (prev=000000d7116d3910)
> [ 52.340420] ------------[ cut here ]------------
> [ 52.340424] kernel BUG at lib/list_debug.c:62!
> [ 52.340566] monitor event: 0040 ilc:2 [#1]SMP
> [ 52.340573] Modules linked in: ctcm fsm qeth ccwgroup zfcp scsi_transport_fc qdio dasd_fba_mod dasd_eckd_mod dasd_mod xfs ghash_s390 prng des_s390 libdes sha3_512_s390 sha3_256_s390 virtio_net virtio_blk net_failover sha_common failover dm_mirror dm_region_hash dm_log dm_mod paes_s390 crypto_engine pkey_cca pkey_ep11 zcrypt pkey_pckmo pkey aes_s390
> [ 52.340606] CPU: 1 UID: 0 PID: 1672 Comm: root-rep2 Kdump: loaded Not tainted 6.18.0-rc3 #1 NONE
> [ 52.340610] Hardware name: IBM 3931 LA1 400 (KVM/Linux)
> [ 52.340611] Krnl PSW : 0704c00180000000 0000015710cda7fe (__list_del_entry_valid_or_report+0xfe/0x128)
> [ 52.340619] R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> [ 52.340622] Krnl GPRS: c0000000ffffefff 0000000100000027 000000000000006d 0000000000000000
> [ 52.340623] 000000d7116d35d8 000000d7116d35d0 0000000000000002 000000d7116d39b0
> [ 52.340625] 000000d7116d3880 000000d7116d3910 000000d7116d3910 000000d382110008
> [ 52.340626] 000003ffac1ccd08 000000d7116d39b0 0000015710cda7fa 000000d7116d37d0
> [ 52.340632] Krnl Code: 0000015710cda7ee: c020003e496f larl %r2,00000157114a3acc
> 0000015710cda7f4: c0e5ffd5280e brasl %r14,000001571077f810
> #0000015710cda7fa: af000000 mc 0,0
> >0000015710cda7fe: b9040029 lgr %r2,%r9
> 0000015710cda802: c0e5ffe5e193 brasl %r14,0000015710996b28
> 0000015710cda808: e34090080004 lg %r4,8(%r9)
> 0000015710cda80e: b9040059 lgr %r5,%r9
> 0000015710cda812: b9040038 lgr %r3,%r8
> [ 52.340643] Call Trace:
> [ 52.340645] [<0000015710cda7fe>] __list_del_entry_valid_or_report+0xfe/0x128
> [ 52.340649] ([<0000015710cda7fa>] __list_del_entry_valid_or_report+0xfa/0x128)
> [ 52.340652] [<0000015710a30b2e>] hugetlb_vmemmap_restore_folios+0x96/0x138
> [ 52.340655] [<0000015710a268ac>] update_and_free_pages_bulk+0x64/0x150
> [ 52.340659] [<0000015710a26f8a>] set_max_huge_pages+0x4ca/0x6f0
> [ 52.340662] [<0000015710a273ba>] hugetlb_sysctl_handler_common+0xea/0x120
> [ 52.340665] [<0000015710a27484>] hugetlb_sysctl_handler+0x44/0x50
> [ 52.340667] [<0000015710b53ffa>] proc_sys_call_handler+0x17a/0x280
> [ 52.340672] [<0000015710a90968>] vfs_write+0x2c8/0x3a0
> [ 52.340676] [<0000015710a90bd2>] ksys_write+0x72/0x100
> [ 52.340679] [<00000157111483a8>] __do_syscall+0x150/0x318
> [ 52.340682] [<0000015711153a5e>] system_call+0x6e/0x90
> [ 52.340684] Last Breaking-Event-Address:
> [ 52.340684] [<000001571077f85c>] _printk+0x4c/0x58
> [ 52.340690] Kernel panic - not syncing: Fatal exception: panic_on_oops
>
> This issue was introduced by commit f13b83fdd996 ("hugetlb: batch TLB
> flushes when freeing vmemmap"). Before that change, the HVO
> implementation called flush_tlb_kernel_range() each time a vmemmap
> PMD split and remapping was performed. The mentioned commit changed this
> to issue a few flush_tlb_all() calls after performing all remappings.
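
Just to spell out the behaviour change for the archives, the two schemes
described above look roughly like this (a simplified sketch only;
split_and_remap_vmemmap_pmd() is a placeholder name, not the actual
helper in mm/hugetlb_vmemmap.c):

	/* Before f13b83fdd996: flush after every single remapping. */
	split_and_remap_vmemmap_pmd(addr);		/* placeholder */
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);

	/* After f13b83fdd996: remap everything first, then flush via
	 * flush_tlb_all() once the batch is done. */
	/* ... all vmemmap PMDs split and remapped ... */
	flush_tlb_all();	/* a no-op on s390 until this patch */
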
>
> However, on s390, flush_tlb_kernel_range() expands to
> __tlb_flush_kernel(), while flush_tlb_all() is defined as a no-op. As a
> result, we went from flushing the TLB for every remapping to no flushing
> at all.
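
To make the s390 side concrete, the relevant pre-patch definitions in
arch/s390/include/asm/tlbflush.h are roughly the following (the
flush_tlb_kernel_range() body is paraphrased from the description above
rather than quoted verbatim):

	/* Pre-patch: flush_tlb_all() silently drops the flush. */
	#define flush_tlb_all()			do { } while (0)

	/* flush_tlb_kernel_range() does flush, via __tlb_flush_kernel(). */
	static inline void flush_tlb_kernel_range(unsigned long start,
						  unsigned long end)
	{
		__tlb_flush_kernel();
	}
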
>
> This commit fixes the issue by implementing flush_tlb_all() on s390 as an
> alias for __tlb_flush_global(). This should flush all TLB entries on all
> CPUs, as the flush_tlb_all() semantics expect.
>
> Fixes: f13b83fdd996 ("hugetlb: batch TLB flushes when freeing vmemmap")
> Signed-off-by: Luiz Capitulino <luizcap@...hat.com>
> ---
> arch/s390/include/asm/tlbflush.h | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
> index 75491baa21974..0d53993534840 100644
> --- a/arch/s390/include/asm/tlbflush.h
> +++ b/arch/s390/include/asm/tlbflush.h
> @@ -103,9 +103,13 @@ static inline void __tlb_flush_mm_lazy(struct mm_struct * mm)
> * flush_tlb_range functions need to do the flush.
> */
> #define flush_tlb() do { } while (0)
> -#define flush_tlb_all() do { } while (0)
> #define flush_tlb_page(vma, addr) do { } while (0)
>
> +static inline void flush_tlb_all(void)
> +{
> + __tlb_flush_global();
> +}
> +
> static inline void flush_tlb_mm(struct mm_struct *mm)
> {
> __tlb_flush_mm_lazy(mm);
Acked-by: Alexander Gordeev <agordeev@...ux.ibm.com>