Message-ID: <CAJD7tkZx-hzVg=TttC7hNSzUXPTMzi+EjUrdO8BdnswyDVEnxA@mail.gmail.com>
Date: Tue, 4 Jun 2024 13:55:47 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: "Vlastimil Babka (SUSE)" <vbabka@...nel.org>
Cc: Erhard Furtner <erhard_f@...lbox.org>, Yu Zhao <yuzhao@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC),
nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
On Tue, Jun 4, 2024 at 1:52 PM Vlastimil Babka (SUSE) <vbabka@...nel.org> wrote:
>
> On 6/4/24 1:24 AM, Yosry Ahmed wrote:
> > On Mon, Jun 3, 2024 at 3:13 PM Erhard Furtner <erhard_f@...lbox.org> wrote:
> >>
> >> On Sun, 2 Jun 2024 20:03:32 +0200
> >> Erhard Furtner <erhard_f@...lbox.org> wrote:
> >>
> >> > On Sat, 1 Jun 2024 00:01:48 -0600
> >> > Yu Zhao <yuzhao@...gle.com> wrote:
> >> >
> >> > > The OOM kills on both kernel versions seem to be reasonable to me.
> >> > >
> >> > > Your system has 2GB memory and it uses zswap with zsmalloc (which is
> >> > > good since it can allocate from the highmem zone) and zstd/lzo (which
> >> > > doesn't matter much). Somehow -- I couldn't figure out why -- it
> >> > > splits the 2GB into a 0.75GB DMA zone and a 1.25GB highmem zone:
> >> > >
> >> > > [ 0.000000] Zone ranges:
> >> > > [ 0.000000] DMA [mem 0x0000000000000000-0x000000002fffffff]
> >> > > [ 0.000000] Normal empty
> >> > > [ 0.000000] HighMem [mem 0x0000000030000000-0x000000007fffffff]
> >> > >
> >> > > The kernel can't allocate from the highmem zone -- only userspace and
> >> > > zsmalloc can. The OOM kills were due to low memory conditions in the
> >> > > DMA zone, which the kernel itself failed to allocate from.
> >> > >
> >> > > Do you know a kernel version that doesn't have OOM kills while running
> >> > > the same workload? If so, could you send that .config to me? If not,
> >> > > could you try disabling CONFIG_HIGHMEM? (It might not help but I'm out
> >> > > of ideas at the moment.)
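FWIW, a quick way to double-check that zone split on the machine is to
dump the per-zone "managed" page counts from /proc/zoneinfo. A minimal
userspace sketch, assuming the usual 4 KiB pages on 32-bit ppc:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/zoneinfo", "r");
	char line[256], zone[64] = "?";
	long pages;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		/* remember which zone the following stats belong to */
		if (sscanf(line, "Node %*d, zone %63s", zone) == 1)
			continue;
		if (sscanf(line, " managed %ld", &pages) == 1)
			printf("%-8s %6ld MiB\n", zone, pages * 4 / 1024);
	}
	fclose(f);
	return 0;
}

On this box it should show roughly 768 MiB for DMA and 1.25 GiB for
HighMem, matching the boot log quoted above.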
> >>
> >> Ok, the bisect I did actually revealed something meaningful:
> >>
> >> # git bisect good
> >> b8cf32dc6e8c75b712cbf638e0fd210101c22f17 is the first bad commit
> >> commit b8cf32dc6e8c75b712cbf638e0fd210101c22f17
> >> Author: Yosry Ahmed <yosryahmed@...gle.com>
> >> Date: Tue Jun 20 19:46:44 2023 +0000
> >>
> >> mm: zswap: multiple zpools support
> >
> > Thanks for bisecting. Taking a look at the thread, it seems like you
> > have a very limited area of memory to allocate kernel memory from. One
> > possible reason that commit can cause an issue is that we will
> > have multiple instances of the zsmalloc slab caches 'zspage' and
> > 'zs_handle', which may contribute to fragmentation in slab memory.
> >
> > Do you have /proc/slabinfo from a good and a bad run by any chance?
> >
> > Also, could you check if the attached patch helps? It makes sure that
> > even when we use multiple zsmalloc zpools, we will use a single slab
> > cache of each type.
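For reference, the rough shape of the idea is below. This is only an
illustrative sketch, not the attached patch itself; zs_get_caches() is a
made-up helper name, while 'zs_handle'/'zspage' and ZS_HANDLE_SIZE mirror
what mm/zsmalloc.c already has. The point is to create the two caches
once, refcounted, instead of once per pool:

static DEFINE_MUTEX(zs_cache_lock);
static struct kmem_cache *zs_handle_cache;
static struct kmem_cache *zspage_cache;
static int zs_cache_users;

static int zs_get_caches(void)
{
	int ret = 0;

	mutex_lock(&zs_cache_lock);
	if (!zs_cache_users) {
		zs_handle_cache = kmem_cache_create("zs_handle",
				ZS_HANDLE_SIZE, 0, 0, NULL);
		zspage_cache = kmem_cache_create("zspage",
				sizeof(struct zspage), 0, 0, NULL);
		if (!zs_handle_cache || !zspage_cache) {
			/* kmem_cache_destroy() is a no-op on NULL */
			kmem_cache_destroy(zs_handle_cache);
			kmem_cache_destroy(zspage_cache);
			zs_handle_cache = NULL;
			zspage_cache = NULL;
			ret = -ENOMEM;
		}
	}
	if (!ret)
		zs_cache_users++;
	mutex_unlock(&zs_cache_lock);
	return ret;
}

zs_create_pool() would then call this instead of creating its own pair,
with a matching refcount drop (destroying the caches at zero) on pool
destruction.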
>
> As for reducing slab fragmentation/footprint, I would also recommend these
> changes to .config:
>
> CONFIG_SLAB_MERGE_DEFAULT=y - this will unify the separate zpool caches as
> well (though the patch still makes sense), along with many other caches
> CONFIG_RANDOM_KMALLOC_CACHES=n - avoids 16 separate copies of the kmalloc caches
Yeah, I did send that patch separately, but I think the problem here
is probably fragmentation in the zsmalloc pools themselves, not the
slab caches used by them.
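To put rough numbers on the zsmalloc side: each pool can keep a
partially-filled zspage per size class, so with the 32 zpools the
bisected commit introduces, the worst case scales accordingly. A
back-of-the-envelope sketch (the ~200 size classes and 4-page zspages
are assumptions for illustration, not measurements from this machine):

#include <stdio.h>

int main(void)
{
	const long classes = 200;	/* assumed zsmalloc size-class count */
	const long zspage_kib = 16;	/* up to 4 pages of 4 KiB per zspage */
	const int pools[] = { 1, 32 };	/* before/after multiple zpools */

	for (unsigned int i = 0; i < sizeof(pools) / sizeof(*pools); i++)
		printf("%2d pool(s): worst case ~%ld MiB stranded\n",
		       pools[i], pools[i] * classes * zspage_kib / 1024);
	return 0;
}

That is ~3 MiB stranded for one pool vs ~100 MiB for 32, which matters
when the kernel can only allocate from a 0.75GB DMA zone.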
>
> although the slabinfo output doesn't seem to show
> CONFIG_RANDOM_KMALLOC_CACHES in action, weirdly. It was enabled in the
> config attached to the first mail.
>
> Both these changes mean giving up some mitigation against potential
> vulnerabilities. But it's not perfect anyway and the memory seems really
> tight here.
I think we may be able to fix the problem here if we address the
zsmalloc fragmentation. As for the slab caches, the patch proposed
above should avoid duplicating them without enabling slab cache merging
in general.
Thanks for chiming in!