Message-ID: <CAPpoddc5ge0myLfN8burL9fEUjc0oaB0C8Yc3_J923hd_O9u4A@mail.gmail.com>
Date: Fri, 7 Jun 2024 10:00:43 +0900
From: Takero Funaki <flintglass@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: zswap: limit number of zpools based on CPU and RAM

On Fri, Jun 7, 2024 at 2:46 Yosry Ahmed <yosryahmed@...gle.com> wrote:

>
> There are a lot of magic numbers in this patch, and it seems like it's
> all based on theory. I don't object to making the number of zpools
> dynamic in some way, but unless we do it in a data-driven way where we
> understand the implications, I think the added complexity and
> inconsistency is not justified.
>
> For example, 2*CPU zpools is an overkill and will cause a lot of
> fragmentation. We use 32 zpools right now for machines with 100s of
> CPUs. I know that you are keeping 32 as the limit, but why 2*CPUs if
> nr_cpus <= 16?
>
> Also, the limitation based on memory size assumes that zsmalloc is the
> only allocator used by zswap, which is unfortunately not the case.
>
> The current implementation using 32 zpools all the time is not
> perfect, and I did write a patch to make it at least be min(nr_cpus,
> 32), but it is simple and it works. Complexity should be justified.
>

Thanks for your comments.
I agree that 2*CPUs is too much. It was conservatively chosen assuming
1/2 contention while all cores access zswap concurrently. A much
smaller factor, or a non-linear scale as you suggested in the main
thread, would be better.

I found your patch in the main thread.
One concern: the hashing will fail if nr_zswap_zpools is 1 or is not
rounded up to a power of 2, because hash_ptr() crashes when the bits
argument is 0.

> +   return entry->pool->zpools[hash_ptr(entry, ilog2(nr_zswap_zpools))];
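
To make the failure mode concrete, here is a minimal userspace sketch
(not the kernel code): hash_64_model() mirrors the kernel's
multiply-and-shift hashing, so bits == 0 produces a shift by 64, which
is undefined behaviour in C. pick_zpool() is a hypothetical guard that
special-cases a single zpool so ilog2(1) == 0 never reaches the hash;
it assumes nr_zpools is a power of two.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the kernel's hash_64(): multiply by the 64-bit
 * golden-ratio constant, then keep the top `bits` bits. */
#define GOLDEN_RATIO_64 0x61C8864680B583EBull

static uint64_t hash_64_model(uint64_t val, unsigned int bits)
{
	/* When bits == 0 this shifts by 64: undefined behaviour,
	 * which is the crash described above. */
	return (val * GOLDEN_RATIO_64) >> (64 - bits);
}

/* Hypothetical guarded selection: return 0 for a single zpool so the
 * hash is never called with bits == 0. Assumes nr_zpools is a power
 * of two, so ilog2(nr_zpools) == __builtin_ctz(nr_zpools). */
static unsigned int pick_zpool(const void *entry, unsigned int nr_zpools)
{
	if (nr_zpools <= 1)
		return 0;
	return (unsigned int)hash_64_model((uint64_t)(uintptr_t)entry,
					   (unsigned int)__builtin_ctz(nr_zpools));
}
```

With the guard, nr_zpools == 1 always selects zpool 0, and for a
power-of-two count the hash lands in [0, nr_zpools).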
