Message-ID: <Z6Z1_XxcoBnM9E2o@google.com>
Date: Fri, 7 Feb 2025 21:07:09 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Kairui Song <ryncsn@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCHv4 02/17] zram: do not use per-CPU compression streams

On Fri, Feb 07, 2025 at 03:12:59PM +0900, Sergey Senozhatsky wrote:
> On (25/02/07 11:56), Sergey Senozhatsky wrote:
> > struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
> > {
> >         for (;;) {
> >                 struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
> > 
> >                 /*
> >                  * Inspired by zswap
> >                  *
> >                  * stream is returned with ->mutex locked which prevents
> >                  * cpu_dead() from releasing this stream under us, however
> >                  * there is still a race window between raw_cpu_ptr() and
> >                  * mutex_lock(), during which we could have been migrated
> >                  * to a CPU that has already destroyed its stream.  If so
> >                  * then unlock and re-try on the current CPU.
> >                  */
> >                 mutex_lock(&zstrm->lock);
> >                 if (likely(zstrm->buffer))
> >                         return zstrm;
> >                 mutex_unlock(&zstrm->lock);
> >         }
> > }
> > 
> > void zcomp_stream_put(struct zcomp_strm *zstrm)
> > {
> >         mutex_unlock(&zstrm->lock);
> > }
> > 
> > int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
> > {
> >         struct zcomp *comp = hlist_entry(node, struct zcomp, node);
> >         struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
> > 
> >         mutex_lock(&zstrm->lock);
> >         zcomp_strm_free(comp, zstrm);
> >         mutex_unlock(&zstrm->lock);
> >         return 0;
> > }
> 
> One downside of this is that this adds mutex to the locking graph and
> limits what zram can do.  In particular we cannot do GFP_NOIO zsmalloc
> handle allocations, because NOIO still does reclaim (it just doesn't
> reach the block layer), which grabs some locks internally, and this
> looks a bit problematic:
> 	zram strm mutex -> zsmalloc GFP_NOIO -> reclaim
> vs
> 	reclaim -> zram strm mutex -> zsmalloc
> 
> A GFP_NOWAIT allocation, on the other hand, has a lower chance of
> success.

I assume this problem is unique to zram and not zswap because zram can
be driven by normal IO (which can then recurse into reclaim), while
zswap is only reachable through reclaim (which cannot recurse)?
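
For illustration, a minimal sketch of how a write path might drive this
API.  The caller name, the zcomp_compress() signature, and the fields on
struct zram are assumptions for the example, not the actual zram code:

	/*
	 * Hypothetical caller: the per-CPU stream mutex taken by
	 * zcomp_stream_get() is held across both the compression and
	 * the zsmalloc handle allocation, which is why the GFP flags
	 * passed to zs_malloc() matter.
	 */
	static int example_write_page(struct zram *zram, struct page *page)
	{
		struct zcomp_strm *zstrm;
		unsigned int comp_len;
		unsigned long handle;
		int ret;

		zstrm = zcomp_stream_get(zram->comp); /* returns with ->lock held */

		ret = zcomp_compress(zstrm, page_address(page), &comp_len);
		if (ret)
			goto out;

		/*
		 * Allocating under zstrm->lock: GFP_NOIO could recurse
		 * into reclaim (the ordering problem quoted above),
		 * while GFP_NOWAIT cannot, but fails more often.
		 */
		handle = zs_malloc(zram->mem_pool, comp_len, GFP_NOWAIT);
		if (IS_ERR_VALUE(handle))
			ret = -ENOMEM;
	out:
		zcomp_stream_put(zstrm); /* drops ->lock */
		return ret;
	}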
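
And spelling out the inversion being described, schematically (the call
chain is illustrative and assumes swap on a zram device):

	zram write
	  zcomp_stream_get()            takes zstrm->lock
	    zs_malloc(..., GFP_NOIO)    NOIO can still enter direct reclaim
	      reclaim
	        swap out to the zram-backed swap device
	          zram write
	            zcomp_stream_get()  needs a stream mutex again

i.e. "zram strm mutex -> zsmalloc GFP_NOIO -> reclaim" meets
"reclaim -> zram strm mutex -> zsmalloc", which is the cycle lockdep
would flag, and in the worst case an actual deadlock.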
