Message-ID: <4f3c6c96-3bea-4369-a2de-e3c559505bba@bytedance.com>
Date: Tue, 20 Feb 2024 12:51:46 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: nphamcs@...il.com, yosryahmed@...gle.com, Minchan Kim
 <minchan@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
 hannes@...xchg.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> On (24/02/19 13:33), Chengming Zhou wrote:
>>  static void migrate_write_unlock(struct zspage *zspage)
>>  {
>>  	write_unlock(&zspage->lock);
>> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>  			dst_zspage = isolate_dst_zspage(class);
>>  			if (!dst_zspage)
>>  				break;
>> -			migrate_write_lock(dst_zspage);
>>  		}
>>  
>>  		src_zspage = isolate_src_zspage(class);
>>  		if (!src_zspage)
>>  			break;
>>  
>> -		migrate_write_lock_nested(src_zspage);
>> -
>> +		migrate_write_lock(src_zspage);
>>  		migrate_zspage(pool, src_zspage, dst_zspage);
>> -		fg = putback_zspage(class, src_zspage);
>>  		migrate_write_unlock(src_zspage);
>>  
>> +		fg = putback_zspage(class, src_zspage);
> 
> Hmm. Lockless putback doesn't look right to me. We modify critical
> zspage fields in putback_zspage().

Which I think is protected by pool->lock, right? We already hold it
at that point.

> 
>>  		if (fg == ZS_INUSE_RATIO_0) {
>>  			free_zspage(pool, class, src_zspage);
>>  			pages_freed += class->pages_per_zspage;
>> @@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
>>  		    || spin_is_contended(&pool->lock)) {
>>  			putback_zspage(class, dst_zspage);
>> -			migrate_write_unlock(dst_zspage);
>>  			dst_zspage = NULL;
>>  
>>  			spin_unlock(&pool->lock);
>> @@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>  		}
>>  	}
>>  
>> -	if (src_zspage) {
>> +	if (src_zspage)
>>  		putback_zspage(class, src_zspage);
>> -		migrate_write_unlock(src_zspage);
>> -	}
>>  
>> -	if (dst_zspage) {
>> +	if (dst_zspage)
>>  		putback_zspage(class, dst_zspage);
>> -		migrate_write_unlock(dst_zspage);
>> -	}
