Message-ID: <u2o6c2zgtsol4uc53tnvi6xeksmhjjgo7xwrs4vg4wade52mbk@3iss25tlosau>
Date: Fri, 7 Feb 2025 15:12:59 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Kairui Song <ryncsn@...il.com>, 
	Andrew Morton <akpm@...ux-foundation.org>, Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCHv4 02/17] zram: do not use per-CPU compression streams

On (25/02/07 11:56), Sergey Senozhatsky wrote:
> struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
> {
>         for (;;) {
>                 struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
> 
>                 /*
>                  * Inspired by zswap
>                  *
>                  * stream is returned with ->mutex locked which prevents
>                  * cpu_dead() from releasing this stream under us, however
>                  * there is still a race window between raw_cpu_ptr() and
>                  * mutex_lock(), during which we could have been migrated
>                  * to a CPU that has already destroyed its stream.  If so
>                  * then unlock and re-try on the current CPU.
>                  */
>                 mutex_lock(&zstrm->lock);
>                 if (likely(zstrm->buffer))
>                         return zstrm;
>                 mutex_unlock(&zstrm->lock);
>         }
> }
> 
> void zcomp_stream_put(struct zcomp_strm *zstrm)
> {
>         mutex_unlock(&zstrm->lock);
> }
> 
> int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
> {
>         struct zcomp *comp = hlist_entry(node, struct zcomp, node);
>         struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
> 
>         mutex_lock(&zstrm->lock);
>         zcomp_strm_free(comp, zstrm);
>         mutex_unlock(&zstrm->lock);
>         return 0;
> }

One downside of this approach is that it adds the mutex to the locking
graph and limits what zram can do.  In particular, we cannot do GFP_NOIO
zsmalloc handle allocations, because NOIO still enters reclaim (it just
doesn't reach the block layer), which grabs some locks internally, and
that ordering looks problematic:
	zram strm mutex -> zsmalloc GFP_NOIO -> reclaim
vs
	reclaim -> zram strm mutex -> zsmalloc

A GFP_NOWAIT allocation, on the other hand, has lower chances of success.
