Message-ID: <e5b65d4e-55ff-4b9a-9add-563738b7193f@bytedance.com>
Date: Fri, 16 Feb 2024 16:47:38 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
 Andrew Morton <akpm@...ux-foundation.org>, Nhat Pham <nphamcs@...il.com>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm/zswap: global lru and shrinker shared by all
 zswap_pools

On 2024/2/15 03:20, Yosry Ahmed wrote:
> On Wed, Feb 14, 2024 at 08:54:37AM +0000, Chengming Zhou wrote:
>> Dynamic zswap_pool creation may leave multiple zswap_pools on the
>> list, but only the first one is currently used.
>>
>> Each zswap_pool has its own lru and shrinker, which is unnecessary
>> and causes problems:
>>
>> 1. Under memory pressure, every zswap_pool's shrinker tries to
>>    shrink its own lru, with no ordering between them.
>>
>> 2. When the zswap limit is hit, only the last zswap_pool's shrink_work
>>    tries to shrink its lru, which is inefficient.
> 
> I think the rationale here was to try and empty the old pool first so
> that we can completely drop it. However, since we only support exclusive
> loads now, the LRU ordering should be entirely decided by the order of
> stores, so I think the oldest entries on the LRU will naturally be
> from the oldest pool, right?
> 
> Probably worth stating this.

Right, I will add this part of yours to the commit message.

> 
>>
>> Anyway, having a global lru and shrinker shared by all zswap_pools
>> is simpler and more efficient.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
> 
> LGTM with a few comments, with those:
> Acked-by: Yosry Ahmed <yosryahmed@...gle.com>
> 
>> ---
>>  mm/zswap.c | 170 +++++++++++++++++++++++--------------------------------------
>>  1 file changed, 65 insertions(+), 105 deletions(-)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index 62fe307521c9..dbff67d7e1c7 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -176,14 +176,18 @@ struct zswap_pool {
>>  	struct kref kref;
>>  	struct list_head list;
>>  	struct work_struct release_work;
>> -	struct work_struct shrink_work;
>>  	struct hlist_node node;
>>  	char tfm_name[CRYPTO_MAX_ALG_NAME];
>> +};
>> +
>> +static struct {
>>  	struct list_lru list_lru;
>> -	struct mem_cgroup *next_shrink;
>> -	struct shrinker *shrinker;
>>  	atomic_t nr_stored;
>> -};
>> +	struct shrinker *shrinker;
>> +	struct work_struct shrink_work;
>> +	struct mem_cgroup *next_shrink;
>> +	spinlock_t shrink_lock;
> 
> The lock is exclusively protecting next_shrink, right? Perhaps we should
> rename it to next_shrink_lock or at least document this.

Ok, I will add a comment to it.
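
Something like the hunk above with a one-line comment on the lock
(just a sketch, the final wording may differ):

static struct {
	struct list_lru list_lru;
	atomic_t nr_stored;
	struct shrinker *shrinker;
	struct work_struct shrink_work;
	struct mem_cgroup *next_shrink;
	/* The lock protects next_shrink only. */
	spinlock_t shrink_lock;
} zswap;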

> 
>> +} zswap;
>>  
>>  /*
>>   * struct zswap_entry
>> @@ -301,9 +305,6 @@ static void zswap_update_total_size(void)
>>  * pool functions
>>  **********************************/
>>  
>> -static void zswap_alloc_shrinker(struct zswap_pool *pool);
>> -static void shrink_worker(struct work_struct *w);
>> -
>>  static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>>  {
>>  	int i;
>> @@ -353,30 +354,16 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>>  	if (ret)
>>  		goto error;
>>  
>> -	zswap_alloc_shrinker(pool);
>> -	if (!pool->shrinker)
>> -		goto error;
>> -
>> -	pr_debug("using %s compressor\n", pool->tfm_name);
> 
> nit: the next patch introduces a new failure case between this debug
> print and zswap_pool_debug() below, so it will become possible again
> that we get one and not the other. Not a big deal though.

Yes, not a big deal.
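
Also, for reference, below is roughly how the single global
shrink_worker ends up using zswap.shrink_lock with the shared state
above. This is only a sketch (the actual hunk is trimmed from the
quote): memcg reference handling and the retry/failure logic are
omitted.

static void shrink_worker(struct work_struct *w)
{
	struct mem_cgroup *memcg;

	/* Walk memcgs round-robin; the lock only guards next_shrink. */
	do {
		spin_lock(&zswap.shrink_lock);
		zswap.next_shrink = mem_cgroup_iter(NULL, zswap.next_shrink, NULL);
		memcg = zswap.next_shrink;
		spin_unlock(&zswap.shrink_lock);

		if (!memcg)
			break;

		/* shrink_memcg() is the existing per-memcg reclaim helper. */
		shrink_memcg(memcg);
		cond_resched();
	} while (!zswap_can_accept());
}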

Thanks!
