Message-ID: <CAJD7tkbhWYzx=6YmzAh0F+cK-_Bn8mPOH7gMbQS7YVXmaFSgFg@mail.gmail.com>
Date: Wed, 5 Jun 2024 16:41:31 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Erhard Furtner <erhard_f@...lbox.org>
Cc: Yu Zhao <yuzhao@...gle.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	linuxppc-dev@...ts.ozlabs.org, Johannes Weiner <hannes@...xchg.org>, 
	Nhat Pham <nphamcs@...il.com>, Chengming Zhou <chengming.zhou@...ux.dev>, 
	Sergey Senozhatsky <senozhatsky@...omium.org>, Minchan Kim <minchan@...nel.org>, 
	"Vlastimil Babka (SUSE)" <vbabka@...nel.org>
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC),
 nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)

On Wed, Jun 5, 2024 at 4:04 PM Erhard Furtner <erhard_f@...lbox.org> wrote:
>
> On Tue, 4 Jun 2024 20:03:27 -0700
> Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> > Could you check if the attached patch helps? It basically changes the
> > number of zpools from 32 to min(32, nr_cpus).
>
> Thanks! The patch does not fix the issue but it helps.
>
> That means I still see the 'kswapd0: page allocation failure' in dmesg, a 'stress-ng-vm: page allocation failure' later on, another kswapd0 error after that, etc., _but_ the machine keeps running the workload, stays usable via VNC, and I no longer get a hard crash.
>
> Without the patch, the kswapd0 error and hard crash (requiring a power-cycle) happen in under 3 minutes. With the patch I get several kswapd0 errors, but the machine has now been running for 2 hours. I double-checked this to be sure.

Thanks for trying this out. This is interesting: even two zpools
cause too much fragmentation for your use case.
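
For reference, here is a rough userspace sketch of the rule the patch
is described as using above (min(32, nr_cpus)); only the formula comes
from that description, the constant name and everything else is made
up for the sketch. On a 2-CPU machine like this one it yields 2 zpools
instead of 32:

#include <stdio.h>

#define MAX_NR_ZPOOLS 32	/* current default; name assumed for this sketch */

static unsigned int nr_zpools(unsigned int nr_cpus)
{
	/* min(32, nr_cpus), as described above */
	return nr_cpus < MAX_NR_ZPOOLS ? nr_cpus : MAX_NR_ZPOOLS;
}

int main(void)
{
	unsigned int cpus[] = { 1, 2, 4, 32, 128 };

	for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
		printf("nr_cpus=%-3u -> nr_zpools=%u\n",
		       cpus[i], nr_zpools(cpus[i]));
	return 0;
}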

I think there are multiple ways to go forward here:
(a) Make the number of zpools a config option, leave the default at
32, but allow special use cases to set it to 1 or similar. This is
probably not preferable because it is not clear to users how to set
it; the idea, though, is that no one would have to touch it except for
special use cases such as Erhard's (which would set it to 1 here).

(b) Make the number of zpools scale linearly with the number of CPUs,
maybe something like nr_cpus/4 or nr_cpus/8. The problem with this
approach is that with a large number of CPUs the extra zpools give
diminishing returns: fragmentation keeps increasing while the
scalability/concurrency gains level off (the numbers are compared in
the sketch after (c) below).

(c) Make the number of zpools scale logarithmically with the number of
CPUs, maybe something like 4 * log2(nr_cpus). This keeps the number of
zpools from growing too much and stays close to the status quo. The
problem is that at a small number of CPUs (e.g. 2), 4 * log2(nr_cpus)
actually gives nr_zpools > nr_cpus, so we would need to come up with a
fancier magic equation (e.g. 4 * log2(nr_cpus/4)).
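
To make the trade-off between (b) and (c) concrete, here is a quick
standalone sketch (plain C, nothing kernel-specific; the formulas are
the ones above, the table itself is just for illustration). It shows
the linear rule growing without bound and the naive logarithmic rule
exceeding nr_cpus at 2 CPUs; any real formula would additionally need
clamping to at least 1:

#include <stdio.h>
#include <math.h>

int main(void)
{
	unsigned int cpus[] = { 2, 4, 8, 32, 128, 512 };

	printf("%8s %12s %18s %20s\n",
	       "nr_cpus", "nr_cpus/4", "4*log2(nr_cpus)", "4*log2(nr_cpus/4)");
	for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
		double c = cpus[i];

		printf("%8u %12.1f %18.1f %20.1f\n",
		       cpus[i], c / 4, 4 * log2(c), 4 * log2(c / 4));
	}
	return 0;
}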

(d) Make the number of zpools scale linearly with memory. This makes
more sense than scaling with CPUs because increasing the number of
zpools increases fragmentation, so it makes sense to limit it by the
available memory. This is also more consistent with other magic
numbers we have (e.g. SWAP_ADDRESS_SPACE_SHIFT).

The problem is that unlike the zswap trees, the zswap pool is not tied
to the swapfile size, so we have no indication of how much memory will
end up in the zswap pool. We could scale the number of zpools with the
total memory on the machine at boot, but that seems hard to get right,
and it would not account for memory hotplug or the zswap global limit
changing.
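
Purely to illustrate the shape of (d), a sketch with an invented
per-zpool memory granularity and invented clamp bounds (not a
proposal, and as said above it is unclear which notion of "memory" it
should even be based on):

#include <stdio.h>

#define ZPOOL_MEM_SHIFT 33	/* hypothetical: one zpool per 8 GiB */

static unsigned int nr_zpools_for_mem(unsigned long long mem_bytes)
{
	unsigned long long n = mem_bytes >> ZPOOL_MEM_SHIFT;

	/* clamp to [1, 32]; both bounds are made up for this example */
	if (n < 1)
		n = 1;
	if (n > 32)
		n = 32;
	return (unsigned int)n;
}

int main(void)
{
	unsigned long long gib = 1ULL << 30;
	unsigned long long sizes[] = { 2 * gib, 16 * gib, 256 * gib, 1024 * gib };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%4llu GiB -> %u zpools\n", sizes[i] / gib,
		       nr_zpools_for_mem(sizes[i]));
	return 0;
}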

(e) A creative mix of the above.

(f) Something else (probably simpler).

I am personally leaning toward (c), but I want to hear the opinions of
other people here. Yu, Vlastimil, Johannes, Nhat? Anyone else?

In the long term, I think we may want to address the lock contention
in zsmalloc itself instead of having zswap spawn multiple zpools.

>
> The patch did not apply cleanly on v6.9.3, so I applied it to v6.10-rc2. The dmesg of the current v6.10-rc2 run is attached.
>
> Regards,
> Erhard
