Message-ID: <Z8n7gjggkyf9qLMy@google.com>
Date: Thu, 6 Mar 2025 19:46:10 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, hannes@...xchg.org,
	nphamcs@...il.com, chengming.zhou@...ux.dev, usamaarif642@...il.com,
	ryan.roberts@....com, 21cnbao@...il.com,
	ying.huang@...ux.alibaba.com, akpm@...ux-foundation.org,
	linux-crypto@...r.kernel.org, herbert@...dor.apana.org.au,
	davem@...emloft.net, clabbe@...libre.com, ardb@...nel.org,
	ebiggers@...gle.com, surenb@...gle.com, kristen.c.accardi@...el.com,
	wajdi.k.feghali@...el.com, vinodh.gopal@...el.com
Subject: Re: [PATCH v8 12/14] mm: zswap: Simplify acomp_ctx resource
 allocation/deletion and mutex lock usage.

On Thu, Mar 06, 2025 at 07:35:36PM +0000, Yosry Ahmed wrote:
> On Mon, Mar 03, 2025 at 12:47:22AM -0800, Kanchana P Sridhar wrote:
> > This patch modifies the acomp_ctx resources' lifetime to be from pool
> > creation to deletion. A "bool __online" and "u8 nr_reqs" are added to
> > "struct crypto_acomp_ctx" which simplify a few things:
> > 
> > 1) zswap_pool_create() will initialize all members of each percpu acomp_ctx
> >    to 0 or NULL and only then initialize the mutex.
> > 2) CPU hotplug will set nr_reqs to 1, allocate resources and set __online
> >    to true, without locking the mutex.
> > 3) CPU hotunplug will lock the mutex before setting __online to false. It
> >    will not delete any resources.
> > 4) acomp_ctx_get_cpu_lock() will lock the mutex, then check if __online
> >    is true, and if so, return the mutex for use in zswap compress and
> >    decompress ops.
> > 5) CPU onlining after offlining will simply check if either __online or
> >    nr_reqs is non-0, and return 0 if so, without re-allocating the
> >    resources.
> > 6) zswap_pool_destroy() will call a newly added zswap_cpu_comp_dealloc() to
> >    delete the acomp_ctx resources.
> > 7) Common resource deletion code in case of zswap_cpu_comp_prepare()
> >    errors, and for use in zswap_cpu_comp_dealloc(), is factored into a new
> >    acomp_ctx_dealloc().
> > 
> > The CPU hot[un]plug callback functions are moved to "pool functions"
> > accordingly.
> > 
> > The per-cpu memory cost of not deleting the acomp_ctx resources upon CPU
> > offlining, and only deleting them when the pool is destroyed, is as follows:
> > 
> >     IAA with batching: 64.8 KB
> >     Software compressors: 8.2 KB
> > 
> > I would appreciate code review comments on whether this memory cost is
> > acceptable, given the latency improvement it provides through a faster
> > reclaim restart after a CPU hotunplug-hotplug sequence: all the hotplug
> > code needs to do is check if acomp_ctx->nr_reqs is non-0 and, if so,
> > set __online to true and return; reclaim can then proceed.
> 
> I like the idea of allocating the resources on CPU hotplug but
> leaving them allocated until the pool is torn down. It avoids allocating
> unnecessary memory if some CPUs are never onlined, and it simplifies
> things because we don't have to synchronize against the resources being
> freed in CPU offline.
> 
> The only case that would suffer from this AFAICT is if someone onlines
> many CPUs, uses them once, and then offlines them and never uses them again.
> I am not familiar with CPU hotplug use cases so I can't tell if that's
> something people do, but I am inclined to agree with this
> simplification.
> 
> > 
> > Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
> > ---
> >  mm/zswap.c | 273 +++++++++++++++++++++++++++++++++++------------------
> >  1 file changed, 182 insertions(+), 91 deletions(-)
> > 
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 10f2a16e7586..cff96df1df8b 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -144,10 +144,12 @@ bool zswap_never_enabled(void)
> >  struct crypto_acomp_ctx {
> >  	struct crypto_acomp *acomp;
> >  	struct acomp_req *req;
> > -	struct crypto_wait wait;
> 
> Is there a reason for moving this? If not please avoid unrelated changes.
> 
> >  	u8 *buffer;
> > +	u8 nr_reqs;
> > +	struct crypto_wait wait;
> >  	struct mutex mutex;
> >  	bool is_sleepable;
> > +	bool __online;
> 
> I don't believe we need this.
> 
> If we are not freeing resources during CPU offlining, then we do not
> need a CPU offline callback and acomp_ctx->__online serves no purpose.
> 
> The whole point of synchronizing between offlining and
> compress/decompress operations is to avoid UAF. If offlining does not
> free resources, then we can hold the mutex directly in the
> compress/decompress path and drop the hotunplug callback completely.
> 
> I also believe nr_reqs can be dropped from this patch, as it seems like
> it's only used to know when to set __online.
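
To illustrate the simplification being suggested: since the resources now outlive any CPU offline event, the compress path can just take the mutex with no __online re-check after locking. A rough user-space model of that shape (pthreads and a plain int standing in for the kernel mutex and the acomp resources; all names here are hypothetical, not from the patch):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-in for the per-CPU acomp_ctx: resources live
 * from pool creation to pool destruction, so there is no online
 * state to consult - only mutual exclusion is needed. */
struct ctx_model {
	pthread_mutex_t mutex;
	int buffer;	/* stands in for the acomp_ctx resources */
};

static void ctx_lock(struct ctx_model *c)
{
	pthread_mutex_lock(&c->mutex);
}

static void ctx_unlock(struct ctx_model *c)
{
	pthread_mutex_unlock(&c->mutex);
}

/* Compress path: lock, use the resources, unlock. No __online
 * check is required because offlining never frees anything. */
static int do_compress(struct ctx_model *c, int in)
{
	int out;

	ctx_lock(c);
	out = in + c->buffer;	/* placeholder for real compression */
	ctx_unlock(c);
	return out;
}
```

The point of the model is only that the lock helper collapses to a bare mutex_lock(): with no resource teardown on offline, there is no UAF window for it to guard against.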
> 
> >  };
> >  
> >  /*
> > @@ -246,6 +248,122 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
> >  **********************************/
> >  static void __zswap_pool_empty(struct percpu_ref *ref);
> >  
> > +static void acomp_ctx_dealloc(struct crypto_acomp_ctx *acomp_ctx)
> > +{
> > +	if (!IS_ERR_OR_NULL(acomp_ctx) && acomp_ctx->nr_reqs) {

Also, we can just return early here to save an indentation level:

	if (IS_ERR_OR_NULL(acomp_ctx) || !acomp_ctx->nr_reqs)
		return;
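
With that early return applied, the rest of the helper loses one indentation level. A user-space model of the resulting shape (plain free() standing in for acomp_request_free()/crypto_free_acomp() and the IS_ERR_OR_NULL checks, purely to show the structure):

```c
#include <stdlib.h>

/* User-space stand-in for struct crypto_acomp_ctx, with the
 * resource pointers reduced to plain heap allocations. */
struct acomp_ctx_model {
	void *req;
	void *buffer;
	void *acomp;
	unsigned char nr_reqs;
};

static void acomp_ctx_dealloc_model(struct acomp_ctx_model *acomp_ctx)
{
	/* Early return in place of wrapping the body in an if-block. */
	if (!acomp_ctx || !acomp_ctx->nr_reqs)
		return;

	free(acomp_ctx->req);
	acomp_ctx->req = NULL;

	free(acomp_ctx->buffer);
	acomp_ctx->buffer = NULL;

	free(acomp_ctx->acomp);
	acomp_ctx->acomp = NULL;

	acomp_ctx->nr_reqs = 0;
}
```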

> > +
> > +		if (!IS_ERR_OR_NULL(acomp_ctx->req))
> > +			acomp_request_free(acomp_ctx->req);
> > +		acomp_ctx->req = NULL;
> > +
> > +		kfree(acomp_ctx->buffer);
> > +		acomp_ctx->buffer = NULL;
> > +
> > +		if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
> > +			crypto_free_acomp(acomp_ctx->acomp);
> > +
> > +		acomp_ctx->nr_reqs = 0;
> > +	}
> > +}
> 
> Please split the pure refactoring into a separate patch to make it
> easier to review.
