Message-ID: <CAJD7tkYpNNsbTZZqFoRh-FkXDgxONZEUPKk1YQv7-TFMWWQRzQ@mail.gmail.com>
Date: Tue, 7 Jan 2025 18:33:30 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Chengming Zhou <chengming.zhou@...ux.dev>, 
	Vitaly Wool <vitalywool@...il.com>, Barry Song <baohua@...nel.org>, 
	Sam Sun <samsun1006219@...il.com>, "linux-mm@...ck.org" <linux-mm@...ck.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	"stable@...r.kernel.org" <stable@...r.kernel.org>, 
	"Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
Subject: Re: [PATCH v2 2/2] mm: zswap: disable migration while using per-CPU acomp_ctx

On Tue, Jan 7, 2025 at 5:18 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> [..]
> > > > Couple of possibly related thoughts:
> > > >
> > > > 1) I have been thinking some more about the purpose of this per-cpu
> > > >    acomp_ctx mutex. It appears that the main benefit is that it causes
> > > >    task blocked errors (which are useful to detect problems) if any
> > > >    computations in the code section it covers take a long duration.
> > > >    Other than that, it does not protect a resource, nor prevent cpu
> > > >    offlining from deleting its containing structure.
> > >
> > > It does protect resources. Consider this case:
> > > - Process A runs on CPU #1, gets the acomp_ctx, and locks it, then is
> > > migrated to CPU #2.
> > > - Process B runs on CPU #1, gets the same acomp_ctx, and tries to lock
> > > it, then waits for process A. Without the lock, they would both be
> > > using the same acomp_ctx (and its buffer) concurrently.
> > >
> > > It is also possible that process A simply gets preempted while running
> > > on CPU #1 by another task that also tries to use the acomp_ctx. The
> > > mutex also protects against this case.
> >
> > Got it, thanks for the explanations. It seems with this patch, the mutex
> > would be redundant in the first example. Would this also be true of the
> > second example where process A gets preempted?
>
> I think the mutex is still required in the second example. Migration
> being disabled does not prevent other processes from running on the
> same CPU and attempting to use the same acomp_ctx.
>
> > If not, is it worth
> > figuring out a solution that works for both migration and preemption?
>
> Not sure exactly what you mean here. I suspect you mean have a single
> mechanism to protect against concurrent usage and CPU hotunplug rather
> than disabling migration and having a mutex. Yeah that would be ideal,
> but probably not for a hotfix.
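
To make the second example above concrete, here is a rough userspace
model of it (pthreads and hypothetical names, not the actual kernel
code): two tasks end up on CPU #1 and hence share the one per-CPU ctx,
and the mutex is all that keeps them from using the same buffer
concurrently:

/*
 * Rough userspace model of the preemption race -- hypothetical names,
 * pthreads instead of kernel primitives. Both "tasks" land on the same
 * per-CPU ctx; the mutex is all that serializes use of the buffer.
 */
#include <pthread.h>
#include <stdio.h>

struct model_ctx {
	pthread_mutex_t mutex;
	char buffer[64];		/* stands in for acomp_ctx->buffer */
};

/* The one ctx for "CPU #1", shared by every task that runs there. */
static struct model_ctx cpu1_ctx = { .mutex = PTHREAD_MUTEX_INITIALIZER };

static void *task(void *arg)
{
	const char *name = arg;
	struct model_ctx *ctx = &cpu1_ctx;	/* like raw_cpu_ptr() */

	pthread_mutex_lock(&ctx->mutex);
	/* Exclusive use of the buffer; a preempting task must wait. */
	snprintf(ctx->buffer, sizeof(ctx->buffer), "owned by %s", name);
	printf("%s sees: %s\n", name, ctx->buffer);
	pthread_mutex_unlock(&ctx->mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, task, "task A");
	pthread_create(&b, NULL, task, "task B");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Remove the lock/unlock pair and the two calls can interleave, each task
seeing the other's data in the shared buffer -- the same corruption the
acomp_ctx mutex prevents.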

Actually, using the mutex to protect against CPU hotunplug is not too
complicated. The following diff is one way to do it (lightly tested).
Johannes, Nhat, any preferences between this patch (disabling
migration) and the following diff?

diff --git a/mm/zswap.c b/mm/zswap.c
index f6316b66fb236..4d6817c679a54 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -869,17 +869,40 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
        struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
        struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);

+       mutex_lock(&acomp_ctx->mutex);
        if (!IS_ERR_OR_NULL(acomp_ctx)) {
                if (!IS_ERR_OR_NULL(acomp_ctx->req))
                        acomp_request_free(acomp_ctx->req);
+               acomp_ctx->req = NULL;
                if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
                        crypto_free_acomp(acomp_ctx->acomp);
                kfree(acomp_ctx->buffer);
        }
+       mutex_unlock(&acomp_ctx->mutex);

        return 0;
 }

+static struct crypto_acomp_ctx *acomp_ctx_get_cpu_locked(
+               struct crypto_acomp_ctx __percpu *acomp_ctx)
+{
+       struct crypto_acomp_ctx *ctx;
+
+       for (;;) {
+               ctx = raw_cpu_ptr(acomp_ctx);
+               mutex_lock(&ctx->mutex);
+               if (likely(ctx->req))
+                       return ctx;
+               /* Raced with zswap_cpu_comp_dead() on CPU hotunplug */
+               mutex_unlock(&ctx->mutex);
+       }
+}
+
+static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *ctx)
+{
+       mutex_unlock(&ctx->mutex);
+}
+
 static bool zswap_compress(struct page *page, struct zswap_entry *entry,
                           struct zswap_pool *pool)
 {
@@ -893,10 +916,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
        gfp_t gfp;
        u8 *dst;

-       acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
-
-       mutex_lock(&acomp_ctx->mutex);
-
+       acomp_ctx = acomp_ctx_get_cpu_locked(pool->acomp_ctx);
        dst = acomp_ctx->buffer;
        sg_init_table(&input, 1);
        sg_set_page(&input, page, PAGE_SIZE, 0);
@@ -949,7 +969,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
        else if (alloc_ret)
                zswap_reject_alloc_fail++;

-       mutex_unlock(&acomp_ctx->mutex);
+       acomp_ctx_put_unlock(acomp_ctx);
        return comp_ret == 0 && alloc_ret == 0;
 }

@@ -960,9 +980,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
        struct crypto_acomp_ctx *acomp_ctx;
        u8 *src;

-       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
-       mutex_lock(&acomp_ctx->mutex);
-
+       acomp_ctx = acomp_ctx_get_cpu_locked(entry->pool->acomp_ctx);
        src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
        /*
