Message-ID: <ca793c5a-5ba8-2fb3-a51d-8b028f5e3c22@gmail.com>
Date: Mon, 27 Jan 2025 21:23:53 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: Sergey Senozhatsky <senozhatsky@...omium.org>,
 Andrew Morton <akpm@...ux-foundation.org>, Minchan Kim <minchan@...nel.org>,
 Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>,
 Nhat Pham <nphamcs@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/6] zsmalloc: make zspage lock preemptible



On 27. 01. 25 08:59, Sergey Senozhatsky wrote:
> Switch over from rwlock_t to an atomic_t variable that takes
> a negative value when the page is under migration, or positive
> values when the page is used by zsmalloc users (object map,
> etc.)  Using an rwsem per-zspage is a little too memory heavy;
> a simple atomic_t should suffice, since we only need to mark a
> zspage as either used-for-write or used-for-read.  This is
> needed to make zsmalloc preemptible in the future.
> 
> Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
> ---
>   mm/zsmalloc.c | 112 +++++++++++++++++++++++++++++---------------------
>   1 file changed, 66 insertions(+), 46 deletions(-)
> 
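
For readers following the scheme: the atomic_t here acts as a minimal
reader-writer lock, with positive values counting readers and a negative
value (ZS_PAGE_WRLOCKED) marking an exclusive writer. A usage sketch
(my illustration, not from the patch; zspage_write_unlock() is assumed
to exist by symmetry with the other renamed helpers):

	/* reader side, e.g. while an object in the zspage is mapped */
	zspage_read_lock(zspage);	/* lock > 0: reader count */
	/* ... access objects; migration is held off ... */
	zspage_read_unlock(zspage);

	/* writer side, e.g. zspage migration */
	zspage_write_lock(zspage);	/* lock == ZS_PAGE_WRLOCKED */
	/* ... migrate objects to a new page ... */
	zspage_write_unlock(zspage);
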
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 817626a351f8..28a75bfbeaa6 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -257,6 +257,9 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
>   	__free_page(page);
>   }
>   
> +#define ZS_PAGE_UNLOCKED	0
> +#define ZS_PAGE_WRLOCKED	-1
> +
>   struct zspage {
>   	struct {
>   		unsigned int huge:HUGE_BITS;
> @@ -269,7 +272,7 @@ struct zspage {
>   	struct zpdesc *first_zpdesc;
>   	struct list_head list; /* fullness list */
>   	struct zs_pool *pool;
> -	rwlock_t lock;
> +	atomic_t lock;
>   };
>   
>   struct mapping_area {
> @@ -290,11 +293,53 @@ static bool ZsHugePage(struct zspage *zspage)
>   	return zspage->huge;
>   }
>   
> -static void migrate_lock_init(struct zspage *zspage);
> -static void migrate_read_lock(struct zspage *zspage);
> -static void migrate_read_unlock(struct zspage *zspage);
> -static void migrate_write_lock(struct zspage *zspage);
> -static void migrate_write_unlock(struct zspage *zspage);
> +static void zspage_lock_init(struct zspage *zspage)
> +{
> +	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
> +}
> +
> +static void zspage_read_lock(struct zspage *zspage)
> +{
> +	atomic_t *lock = &zspage->lock;
> +	int old;
> +
> +	while (1) {
> +		old = atomic_read(lock);
> +		if (old == ZS_PAGE_WRLOCKED) {
> +			cpu_relax();
> +			continue;
> +		}
> +
> +		if (atomic_cmpxchg(lock, old, old + 1) == old)
> +			return;

You can use atomic_try_cmpxchg() here:

	if (atomic_try_cmpxchg(lock, &old, old + 1))
		return;

> +
> +		cpu_relax();
> +	}
> +}
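
FWIW, with atomic_try_cmpxchg() the whole read-lock loop could look
like the sketch below (untested; on failure atomic_try_cmpxchg()
updates @old with the current lock value, so an explicit re-read is
only needed after spinning on a writer):

static void zspage_read_lock(struct zspage *zspage)
{
	atomic_t *lock = &zspage->lock;
	int old = atomic_read(lock);

	while (1) {
		if (old == ZS_PAGE_WRLOCKED) {
			cpu_relax();
			old = atomic_read(lock);
			continue;
		}

		/* on failure, @old is updated to the current value */
		if (atomic_try_cmpxchg(lock, &old, old + 1))
			return;

		cpu_relax();
	}
}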
> +
> +static void zspage_read_unlock(struct zspage *zspage)
> +{
> +	atomic_dec(&zspage->lock);
> +}
> +
> +static void zspage_write_lock(struct zspage *zspage)
> +{
> +	atomic_t *lock = &zspage->lock;
> +	int old;
> +
> +	while (1) {
> +		old = atomic_cmpxchg(lock, ZS_PAGE_UNLOCKED, ZS_PAGE_WRLOCKED);
> +		if (old == ZS_PAGE_UNLOCKED)
> +			return;

Also, the above code can be rewritten as:

	while (1) {
		old = ZS_PAGE_UNLOCKED;
		if (atomic_try_cmpxchg(lock, &old, ZS_PAGE_WRLOCKED))
			return;
> +
> +		cpu_relax();
> +	}
> +}
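
Putting it together, the complete function would read (again an
untested sketch; note that @old must be reset to ZS_PAGE_UNLOCKED on
every iteration, because a failed atomic_try_cmpxchg() overwrites it
with the current lock value):

static void zspage_write_lock(struct zspage *zspage)
{
	atomic_t *lock = &zspage->lock;
	int old;

	while (1) {
		/* re-arm @old: a failed try_cmpxchg clobbers it */
		old = ZS_PAGE_UNLOCKED;
		if (atomic_try_cmpxchg(lock, &old, ZS_PAGE_WRLOCKED))
			return;

		cpu_relax();
	}
}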

The above change will result in slightly better generated asm: the
compiler can use the flags set by the CMPXCHG instruction directly
(on x86) instead of emitting a separate comparison against the
returned old value.

Uros.
