Date: Fri, 7 Jun 2024 13:49:51 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Yosry Ahmed <yosryahmed@...gle.com>, Takero Funaki <flintglass@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>,
 Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: zswap: limit number of zpools based on CPU and RAM

On 2024/6/7 12:58, Yosry Ahmed wrote:
> On Thu, Jun 6, 2024 at 6:01 PM Takero Funaki <flintglass@...il.com> wrote:
>>
>> On Fri, Jun 7, 2024 at 2:46 Yosry Ahmed <yosryahmed@...gle.com> wrote:
>>
>>>
>>> There are a lot of magic numbers in this patch, and it seems like it's
>>> all based on theory. I don't object to making the number of zpools
>>> dynamic in some way, but unless we do it in a data-driven way where we
>>> understand the implications, I think the added complexity and
>>> inconsistency is not justified.
>>>
>>> For example, 2*CPU zpools is overkill and will cause a lot of
>>> fragmentation. We use 32 zpools right now for machines with 100s of
>>> CPUs. I know that you are keeping 32 as the limit, but why 2*CPUs if
>>> nr_cpus <= 16?
>>>
>>> Also, the limitation based on memory size assumes that zsmalloc is the
>>> only allocator used by zswap, which is unfortunately not the case.
>>>
>>> The current implementation using 32 zpools all the time is not
>>> perfect, and I did write a patch to make it at least be min(nr_cpus,
>>> 32), but it is simple and it works. Complexity should be justified.
>>>
>>
>> Thanks for your comments.
>> I agree that 2*cpu is too much. It was chosen conservatively, assuming
>> 1/2 contention while all cores are accessing zswap. A much smaller
>> factor, or a non-linear scale as you suggested in the main thread,
>> would be better.
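
(For illustration only, a minimal userspace sketch of one possible
sub-linear scale -- kept a power of two and capped at 32. The exact
formula here is an assumption, not something taken from the patch or
from the main thread.)

#include <stdio.h>

/*
 * Illustrative only: grow the zpool count roughly with sqrt(nr_cpus),
 * keep it a power of two, and cap it at 32.
 */
static unsigned int nr_zpools_for(unsigned int nr_cpus)
{
	unsigned int n = 1;

	while (n * n < nr_cpus && n < 32)
		n *= 2;

	return n;
}

int main(void)
{
	unsigned int cpus[] = { 1, 4, 16, 64, 256, 512 };

	for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
		printf("%3u CPUs -> %2u zpools\n", cpus[i], nr_zpools_for(cpus[i]));
	return 0;
}
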
> 
> Chengming is currently experimenting with fixing the lock contention
> problem in zsmalloc by making the lock more granular. Based on the
> data he finds, we may be able to just drop the multiple zpools patch
> from zswap.

Hope so, but not sure yet; I will test and compare one pool with multiple pools.
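
(Rough sketch only of the general idea behind "making the lock more
granular": replace one pool-wide lock with per-bucket locks so that
operations on different buckets stop contending. All names below are
made up; this is not the actual zsmalloc change being tested.)

#include <pthread.h>
#include <stddef.h>

#define NR_BUCKETS 16	/* hypothetical number of independent regions */

/* Coarse: one lock protects everything, so every allocation serializes here. */
struct coarse_pool {
	pthread_mutex_t lock;
	/* ... allocator state ... */
};

/* Granular: each bucket has its own lock, so allocations that touch
 * different buckets no longer contend with each other. */
struct granular_pool {
	struct {
		pthread_mutex_t lock;
		/* ... per-bucket allocator state ... */
	} buckets[NR_BUCKETS];
};

static void granular_alloc(struct granular_pool *pool, size_t size)
{
	unsigned int b = size % NR_BUCKETS;	/* pick a bucket, e.g. by size class */

	pthread_mutex_lock(&pool->buckets[b].lock);
	/* ... allocate from bucket b ... */
	pthread_mutex_unlock(&pool->buckets[b].lock);
}
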

> 
> I'd wait for his findings before investing more into improving this.

Ok, I will get back with code and some testing data in a few days.

Thanks.

> 
>>
>> I found your patch from the main thread.
>> One point I'm concerned about: this hashing will fail if nr_zswap_zpools
>> is 1 or is not rounded to a power of 2. hash_ptr() crashes when the bit
>> count is 0.
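
(A minimal userspace sketch of the guard being suggested: round the pool
count up to a power of two and skip hashing entirely for a single pool,
since handing 0 bits to the hash is the failure mode. Only hash_ptr()
and nr_zswap_zpools come from the thread; the helpers below are made-up
stand-ins.)

#include <stdint.h>
#include <stdio.h>

/* Stand-in for roundup_pow_of_two(): smallest power of two >= n. */
static unsigned int roundup_pow2(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/*
 * Pick a pool index for an entry pointer. nr_pools must be a power of
 * two; for a single pool return 0 without hashing, which is the case
 * that would otherwise hand 0 bits to the hash (undefined shift by 64).
 */
static unsigned int pool_index(const void *entry, unsigned int nr_pools)
{
	unsigned int bits = 0;

	if (nr_pools <= 1)
		return 0;

	while ((1u << bits) < nr_pools)
		bits++;

	/* multiplicative pointer hash, same idea as the kernel's hash_ptr() */
	return (unsigned int)(((uint64_t)(uintptr_t)entry * 0x9E3779B97F4A7C15ULL) >> (64 - bits));
}

int main(void)
{
	int dummy;
	unsigned int nr = roundup_pow2(20);	/* 20 -> 32 pools */

	printf("1 pool   -> index %u\n", pool_index(&dummy, roundup_pow2(1)));
	printf("%u pools -> index %u\n", nr, pool_index(&dummy, nr));
	return 0;
}
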
> 
> Yeah that patch was just for experimenting, I did not test it well.
> Thanks for taking a look.
