Message-Id: <20250507170554.53a29e42d3edda8a9f072334@linux-foundation.org>
Date: Wed, 7 May 2025 17:05:54 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Alexander Gordeev <agordeev@...ux.ibm.com>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>, Daniel Axtens
<dja@...ens.net>, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kasan-dev@...glegroups.com, linux-s390@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH v5 1/1] kasan: Avoid sleepable page allocation from
atomic context
On Wed, 7 May 2025 14:48:03 +0200 Alexander Gordeev <agordeev@...ux.ibm.com> wrote:
> apply_to_pte_range() enters the lazy MMU mode and then invokes the
> kasan_populate_vmalloc_pte() callback on each page table walk
> iteration. However, the callback can sleep when trying to allocate a
> single page, e.g. if an architecture disables preemption on lazy MMU
> mode enter.
>
> On s390, if arch_enter_lazy_mmu_mode() is made to call preempt_disable()
> and arch_leave_lazy_mmu_mode() to call preempt_enable() (see the sketch
> after the trace below), the following crash occurs:
>
> [ 553.332108] preempt_count: 1, expected: 0
> [ 553.332117] no locks held by multipathd/2116.
> [ 553.332128] CPU: 24 PID: 2116 Comm: multipathd Kdump: loaded Tainted:
> [ 553.332139] Hardware name: IBM 3931 A01 701 (LPAR)
> [ 553.332146] Call Trace:
> [ 553.332152] [<00000000158de23a>] dump_stack_lvl+0xfa/0x150
> [ 553.332167] [<0000000013e10d12>] __might_resched+0x57a/0x5e8
> [ 553.332178] [<00000000144eb6c2>] __alloc_pages+0x2ba/0x7c0
> [ 553.332189] [<00000000144d5cdc>] __get_free_pages+0x2c/0x88
> [ 553.332198] [<00000000145663f6>] kasan_populate_vmalloc_pte+0x4e/0x110
> [ 553.332207] [<000000001447625c>] apply_to_pte_range+0x164/0x3c8
> [ 553.332218] [<000000001448125a>] apply_to_pmd_range+0xda/0x318
> [ 553.332226] [<000000001448181c>] __apply_to_page_range+0x384/0x768
> [ 553.332233] [<0000000014481c28>] apply_to_page_range+0x28/0x38
> [ 553.332241] [<00000000145665da>] kasan_populate_vmalloc+0x82/0x98
> [ 553.332249] [<00000000144c88d0>] alloc_vmap_area+0x590/0x1c90
> [ 553.332257] [<00000000144ca108>] __get_vm_area_node.constprop.0+0x138/0x260
> [ 553.332265] [<00000000144d17fc>] __vmalloc_node_range+0x134/0x360
> [ 553.332274] [<0000000013d5dbf2>] alloc_thread_stack_node+0x112/0x378
> [ 553.332284] [<0000000013d62726>] dup_task_struct+0x66/0x430
> [ 553.332293] [<0000000013d63962>] copy_process+0x432/0x4b80
> [ 553.332302] [<0000000013d68300>] kernel_clone+0xf0/0x7d0
> [ 553.332311] [<0000000013d68bd6>] __do_sys_clone+0xae/0xc8
> [ 553.332400] [<0000000013d68dee>] __s390x_sys_clone+0xd6/0x118
> [ 553.332410] [<0000000013c9d34c>] do_syscall+0x22c/0x328
> [ 553.332419] [<00000000158e7366>] __do_syscall+0xce/0xf0
> [ 553.332428] [<0000000015913260>] system_call+0x70/0x98
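>
> For reference, the s390 hook change described above would look roughly
> like this (an illustrative sketch, assuming the hooks do nothing beyond
> toggling preemption):
>
> 	static inline void arch_enter_lazy_mmu_mode(void)
> 	{
> 		preempt_disable();
> 	}
>
> 	static inline void arch_leave_lazy_mmu_mode(void)
> 	{
> 		preempt_enable();
> 	}
>
> With this in place, everything between the two hooks runs with
> preemption disabled and therefore must not sleep.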
Is this a crash, or a warning? From the description I suspect it was a
sleep-while-atomic warning?
Can we please have the complete dmesg output?