Message-ID: <m2fr96wjtw.wl-thehajime@gmail.com>
Date: Fri, 19 Dec 2025 09:51:55 -0600
From: Hajime Tazaki <thehajime@...il.com>
To: daniel@...f.com
Cc: joshua.hahnjy@...il.com,
akpm@...ux-foundation.org,
linux@...ck-us.net,
jackmanb@...gle.com,
hannes@...xchg.org,
mhocko@...e.com,
surenb@...gle.com,
vbabka@...e.cz,
ziy@...dia.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
kernel-team@...a.com
Subject: Re: [PATCH v2] mm/page_alloc: Report 1 as zone_batchsize for !CONFIG_MMU
Hello Daniel,
On Thu, 18 Dec 2025 06:30:42 -0600,
Daniel Palmer wrote:
>
> Hi Joshua,
>
> On Thu, 18 Dec 2025 at 17:32, Joshua Hahn <joshua.hahnjy@...il.com> wrote:
> >
> > Commit 2783088ef24e ("mm/page_alloc: prevent reporting pcp->batch = 0")
> > moved the error handling (0-handling) of zone_batchsize from its
> > callers to inside the function. However, the commit left out the error
> > handling for the NOMMU case, leading to deadlocks on NOMMU systems.
> >
> > For NOMMU systems, return 1 instead of 0 for zone_batchsize, which restores
> > the previous deadlock-free behavior.
>
> I tested this on my 68000 setup: I filled the memory to cause an OOM
> and got an OOM instead of a deadlock, as expected.
>
> Tested-by: Daniel Palmer <daniel@...ngy.jp>
>
> FWIW, there was a BoF about NOMMU at LPC last week, and I mentioned to
> the presenters, who seem to be using NOMMU in real-world applications,
> that NOMMU was broken in mainline. I had hoped they would have chimed
> in on this...
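For anyone reading along without the patch in front of them, my
understanding of the changelog is that the fix amounts to roughly the
following in the !CONFIG_MMU branch of zone_batchsize() in
mm/page_alloc.c (a sketch of the idea only, not the actual hunk):

#else	/* !CONFIG_MMU */
	/*
	 * Batching of frees is still suppressed on NOMMU, but report 1
	 * instead of 0 so callers never end up with pcp->batch == 0,
	 * which is what led to the reported hang.
	 */
	return 1;
#endif
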
I tested with UML with the nommu extension (currently out of tree *1)
and reproduced the issue with a crafted program that triggers an OOM
(a sketch of such a program follows the trace below).
Without the patch it indeed hangs and console access is lost; with this
patch it fails with a proper message like the one below:
oom: page allocation failure: order:12, mode:0xcc0(GFP_KERNEL), nodemask=(null)
CPU: 0 UID: 0 PID: 32 Comm: oom Not tainted 6.18.0-12966-gc43a4f128407-dirty #223 NONE
Stack:
60a8fb80 604a246e 603b9569 00000001
ffffff00 604a246e 6002440d 604a1479
60a8fbb0 6002bbb3 60556910 00000000
Call Trace:
[<6002440d>] ? _printk+0x0/0x5b
[<6002df89>] show_stack+0x11c/0x12b
[<603b9569>] ? dump_stack_print_info+0x0/0x12f
[<6002440d>] ? _printk+0x0/0x5b
[<6002bbb3>] dump_stack_lvl+0x65/0x80
[<6002bbec>] dump_stack+0x1e/0x20
[<600e0c13>] warn_alloc+0x118/0x195
[<60083ae0>] ? __mutex_trylock+0x16/0x1e
(snip)
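For reference, the crafted program is nothing special; a reproducer
along these lines is enough (a minimal sketch, not the exact program,
and the allocation size is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* 16 MiB per allocation; the size is arbitrary for this sketch. */
	const size_t chunk = 16UL << 20;

	for (;;) {
		void *p = malloc(chunk);

		if (!p) {
			fprintf(stderr, "malloc failed, giving up\n");
			return 1;
		}
		/* Touch every byte so the memory is really consumed. */
		memset(p, 0xa5, chunk);
	}
}
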
Tested-by: Hajime Tazaki <thehajime@...il.com>
*1 https://lore.kernel.org/all/cover.1762588860.git.thehajime@gmail.com/
-- Hajime