Message-ID: <f1f527d6-2866-4a64-8018-453c468c88ab@kernel.org>
Date: Tue, 4 Jun 2024 23:00:27 +0200
From: "Vlastimil Babka (SUSE)" <vbabka@...nel.org>
To: Yosry Ahmed <yosryahmed@...gle.com>, Yu Zhao <yuzhao@...gle.com>
Cc: Erhard Furtner <erhard_f@...lbox.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
 Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>,
 Chengming Zhou <chengming.zhou@...ux.dev>,
 Sergey Senozhatsky <senozhatsky@...omium.org>,
 Minchan Kim <minchan@...nel.org>
Subject: Re: kswapd0: page allocation failure: order:0,
 mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel
 v6.5.9, 32bit ppc)

On 6/4/24 8:01 PM, Yosry Ahmed wrote:
> On Tue, Jun 4, 2024 at 10:54 AM Yu Zhao <yuzhao@...gle.com> wrote:
>> There was a lot of user memory in the DMA zone. So at some point the
>> highmem zone was full and allocations fell back to lower zones.
>>
>> The problem with zone fallback is that recent allocations go into
>> lower zones, meaning they are further back on the LRU list. This
>> applies to both user memory and zsmalloc memory -- the latter has a
>> writeback LRU. On top of this, neither the zswap shrinker nor the
>> zsmalloc shrinker (compaction) is zone aware. So page reclaim might
>> have trouble hitting the right target zone.
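
(Purely as an illustration of that fallback direction, a toy userspace
model follows. This is not kernel code, and the zone sizes are made up;
it only shows allocations spilling from the highest eligible zone into
lower ones once it fills up:)

#include <stdio.h>

enum zone { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM, NR_ZONES };
static const char *zone_name[NR_ZONES] = { "DMA", "NORMAL", "HIGHMEM" };
static long free_pages[NR_ZONES] = { 4096, 8192, 0 }; /* HIGHMEM full */

/* Walk from the highest zone this allocation may use down to DMA. */
static int toy_alloc_page(enum zone highest)
{
	for (int z = highest; z >= ZONE_DMA; z--) {
		if (free_pages[z] > 0) {
			free_pages[z]--;
			return z;
		}
	}
	return -1; /* order-0 allocation failure */
}

int main(void)
{
	/* A user page that could live in HIGHMEM lands in NORMAL or DMA. */
	int z = toy_alloc_page(ZONE_HIGHMEM);
	printf("allocated from %s\n", z < 0 ? "nowhere" : zone_name[z]);
	return 0;
}
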
> 
> I see what you mean. In this case, yeah I think the internal
> fragmentation in the zsmalloc pools may be the reason behind the
> problem.
> 
> How many CPUs does this machine have? I am wondering if 32 can be
> overkill for small machines; perhaps the number of pools should be
> min(nr_cpus, 32)?
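
(A minimal sketch of that clamp, for illustration only; the helper name
zswap_nr_pools() and the ZSWAP_MAX_POOLS macro are invented here, not
taken from the current zswap code, which uses a fixed pool count:)

#include <linux/cpumask.h>
#include <linux/minmax.h>

#define ZSWAP_MAX_POOLS 32	/* hypothetical cap, today's fixed count */

/* Never create more pools than CPUs; keep the cap on big machines. */
static unsigned int zswap_nr_pools(void)
{
	return min_t(unsigned int, num_possible_cpus(), ZSWAP_MAX_POOLS);
}
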
> 
> Alternatively, the number of pools should scale with the memory size
> in some way, such that we only increase fragmentation when it's
> tolerable.

Sounds like a good idea to me; maybe a combination of both. There is no
point in scaling the pool count up if there's no benefit, only the
downside of more memory consumption.
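
For concreteness, a combination of both could look like the variant of
the earlier sketch below. This is hypothetical: the one-pool-per-GiB
scaling is an arbitrary placeholder rather than a tuned value, and the
names are still invented.

#include <linux/cpumask.h>
#include <linux/minmax.h>
#include <linux/mm.h>

#define ZSWAP_MAX_POOLS 32

/* Cap the pool count by both CPU count and memory size, so a machine
 * that is small in either dimension avoids the extra fragmentation. */
static unsigned int zswap_nr_pools(void)
{
	unsigned long gib = totalram_pages() >> (30 - PAGE_SHIFT);
	unsigned int by_mem = clamp_t(unsigned long, gib, 1, ZSWAP_MAX_POOLS);
	unsigned int by_cpu = min_t(unsigned int, num_possible_cpus(),
				    ZSWAP_MAX_POOLS);

	return min(by_mem, by_cpu);
}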
