Message-ID: <CAJD7tkZ+UeXXvFc+M9JssooW_0rW-GVgUMo3GVcSMCxQhndZuA@mail.gmail.com>
Date: Tue, 7 Jan 2025 21:34:15 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Chengming Zhou <chengming.zhou@...ux.dev>, Nhat Pham <nphamcs@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Andrew Morton <akpm@...ux-foundation.org>, 
	Vitaly Wool <vitalywool@...il.com>, Barry Song <baohua@...nel.org>, 
	Sam Sun <samsun1006219@...il.com>, "linux-mm@...ck.org" <linux-mm@...ck.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	"stable@...r.kernel.org" <stable@...r.kernel.org>, 
	"Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
Subject: Re: [PATCH v2 2/2] mm: zswap: disable migration while using per-CPU acomp_ctx

On Tue, Jan 7, 2025 at 9:00 PM Chengming Zhou <chengming.zhou@...ux.dev> wrote:
>
> On 2025/1/8 12:46, Nhat Pham wrote:
> > On Wed, Jan 8, 2025 at 9:34 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> >>
> >>
> >> Actually, using the mutex to protect against CPU hotunplug is not too
> >> complicated. The following diff is one way to do it (lightly tested).
> >> Johannes, Nhat, any preferences between this patch (disabling
> >> migration) and the following diff?
> >
> > I mean, if this works, this over disabling migration any day? :)
> >
> >>
> >> diff --git a/mm/zswap.c b/mm/zswap.c
> >> index f6316b66fb236..4d6817c679a54 100644
> >> --- a/mm/zswap.c
> >> +++ b/mm/zswap.c
> >> @@ -869,17 +869,40 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
> >>          struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
> >>          struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
> >>
> >> +       mutex_lock(&acomp_ctx->mutex);
> >>          if (!IS_ERR_OR_NULL(acomp_ctx)) {
> >>                  if (!IS_ERR_OR_NULL(acomp_ctx->req))
> >>                          acomp_request_free(acomp_ctx->req);
> >> +               acomp_ctx->req = NULL;
> >>                  if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
> >>                          crypto_free_acomp(acomp_ctx->acomp);
> >>                  kfree(acomp_ctx->buffer);
> >>          }
> >> +       mutex_unlock(&acomp_ctx->mutex);
> >>
> >>          return 0;
> >>   }
> >>
> >> +static struct crypto_acomp_ctx *acomp_ctx_get_cpu_locked(
> >> +               struct crypto_acomp_ctx __percpu *acomp_ctx)
> >> +{
> >> +       struct crypto_acomp_ctx *ctx;
> >> +
> >> +       for (;;) {
> >> +               ctx = raw_cpu_ptr(acomp_ctx);
> >> +               mutex_lock(&ctx->mutex);
> >
> > I'm a bit confused. IIUC, ctx is per-cpu right? What's protecting this
> > cpu-local data (including the mutex) from being invalidated under us
> > while we're sleeping and waiting for the mutex?

Please correct me if I am wrong, but my understanding is that memory
allocated with alloc_percpu() is allocated for each *possible* CPU,
and does not go away when CPUs are offlined. We allocate the per-CPU
crypto_acomp_ctx structs with alloc_percpu() (including the mutex), so
they should not go away with CPU offlining.
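
For reference, this is roughly what I am referring to (abridged from the
struct definition and zswap_pool_create() in mm/zswap.c, so take the exact
field list as approximate):

struct crypto_acomp_ctx {
	struct crypto_acomp *acomp;	/* allocated/freed by hotplug callbacks */
	struct acomp_req *req;		/* allocated/freed by hotplug callbacks */
	u8 *buffer;			/* allocated/freed by hotplug callbacks */
	struct mutex mutex;		/* lives as long as the pool itself */
};

/* zswap_pool_create(): backing memory exists for every possible CPU */
pool->acomp_ctx = alloc_percpu(*pool->acomp_ctx);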

OTOH, we allocate the crypto_acomp_ctx.acomp, crypto_acomp_ctx.req,
and crypto_acomp_ctx.buffer only for online CPUs, through the CPU
hotplug notifiers (i.e. zswap_cpu_comp_prepare() and
zswap_cpu_comp_dead()). These are the resources that can go away with
CPU offlining, and they are what we need to protect.
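
Those callbacks are wired up through the CPU hotplug state machine,
roughly like this (again abridged from mm/zswap.c):

/* zswap_setup(): register the multi-instance hotplug state */
ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
			      "mm/zswap_pool:prepare",
			      zswap_cpu_comp_prepare,
			      zswap_cpu_comp_dead);

/* zswap_pool_create(): runs zswap_cpu_comp_prepare() on each online CPU */
ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);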

The approach I am taking here is to hold the per-CPU mutex in the CPU
offlining code while we free these resources, and set
crypto_acomp_ctx.req to NULL. In acomp_ctx_get_cpu_locked(), we hold
the mutex of the current CPU, and check if crypto_acomp_ctx.req is
NULL.

If it is NULL, then the CPU was offlined between raw_cpu_ptr() and
acquiring the mutex, and we retry on whatever CPU we ended up on. If
it is not NULL, then we are guaranteed that the resources will not be
freed by CPU offlining until acomp_ctx_put_unlock() is called and the
mutex is released.
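
In other words, the rest of the diff (cut off in the quote above) would
look roughly like the sketch below. This is a reconstruction of the
intent, not the tested code:

static struct crypto_acomp_ctx *acomp_ctx_get_cpu_locked(
		struct crypto_acomp_ctx __percpu *acomp_ctx)
{
	struct crypto_acomp_ctx *ctx;

	for (;;) {
		ctx = raw_cpu_ptr(acomp_ctx);
		mutex_lock(&ctx->mutex);

		/*
		 * The CPU we read the pointer on may have been offlined
		 * (and its resources freed) before we acquired the mutex.
		 * In that case ctx->req is NULL; drop the mutex and retry
		 * on the CPU we are currently running on.
		 */
		if (likely(ctx->req))
			return ctx;
		mutex_unlock(&ctx->mutex);
	}
}

static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *ctx)
{
	mutex_unlock(&ctx->mutex);
}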

>
> Yeah, it's not safe. We can only use this_cpu_ptr(), which disables
> preemption (so CPU offlining can't kick in), and take a refcount on the
> ctx, since we can't mutex_lock() in the preempt-disabled section.

My understanding is that the point of this_cpu_ptr() disabling
preemption is to prevent multiple CPUs from concurrently accessing a
single CPU's per-CPU data. In the zswap case we don't really need that,
because the mutex already protects against concurrent use (and we
cannot keep preemption disabled here anyway, since this path sleeps).
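
With the diff above, the callers would then just do something like this
in zswap_compress()/zswap_decompress(), instead of open-coding
raw_cpu_ptr() + mutex_lock() (illustrative only):

	struct crypto_acomp_ctx *acomp_ctx;

	acomp_ctx = acomp_ctx_get_cpu_locked(pool->acomp_ctx);
	/* ... use acomp_ctx->req / acomp_ctx->buffer for (de)compression ... */
	acomp_ctx_put_unlock(acomp_ctx);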
