Message-ID: <CAKEwX=O=zcNBHaxqj34mZ0Q0OENsbdAOVR3ySW3v-mr--AjScw@mail.gmail.com>
Date: Wed, 26 Feb 2025 15:16:39 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, chengming.zhou@...ux.dev, 
	linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] zswap: do not crash the kernel on decompression failure

On Tue, Feb 25, 2025 at 6:40 PM Yosry Ahmed <yosry.ahmed@...ux.dev> wrote:
>
> On Tue, Feb 25, 2025 at 01:32:00PM -0800, Nhat Pham wrote:
> > Currently, we crash the kernel when a decompression failure occurs in
> > zswap (either because of memory corruption, or a bug in the compression
> > algorithm). This is overkill. We should only SIGBUS the unfortunate
> > process asking for the zswap entry on zswap load, and skip the corrupted
> > entry in zswap writeback.
>
> It's probably worth adding more details here. For example, how the
> SIGBUS is delivered (it's not super clear in the code), and the
> distinction between handling decompression failures for loads and
> writeback.

Will do :)
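For readers following along, the intended load-path behavior can be modelled in plain C (a userspace sketch; `handle_swap_fault`, `decompress`, and the enum are illustrative names, not the patch's actual code): a failed decompression makes the load fail instead of BUG()ing the kernel, and the fault path turns that failure into a SIGBUS for the faulting task only.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins, not the kernel's real API. */
enum fault_result { FAULT_OK, FAULT_SIGBUS };

/* Models zswap_decompress(): fails on a corrupted entry. */
static bool decompress(bool corrupted)
{
	return !corrupted;
}

/*
 * Models the load path: propagate the failure instead of BUG_ON(),
 * so only the task that faulted on this entry is affected.
 */
static enum fault_result handle_swap_fault(bool corrupted)
{
	if (!decompress(corrupted))
		return FAULT_SIGBUS;
	return FAULT_OK;
}
```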

>
> >
> > See [1] for a recent upstream discussion about this.
> >
> > [1]: https://lore.kernel.org/all/ZsiLElTykamcYZ6J@casper.infradead.org/
> >
> > Suggested-by: Matthew Wilcox <willy@...radead.org>
> > Suggested-by: Yosry Ahmed <yosryahmed@...gle.com>
>
> It's Yosry Ahmed <yosry.ahmed@...ux.dev> now :P

Oops, will fix :P

>
> > Signed-off-by: Nhat Pham <nphamcs@...il.com>
> > ---
> >  mm/zswap.c | 85 +++++++++++++++++++++++++++++++++++++-----------------
> >  1 file changed, 58 insertions(+), 27 deletions(-)
> >
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index f6316b66fb23..31d4397eed61 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -62,6 +62,8 @@ static u64 zswap_reject_reclaim_fail;
> >  static u64 zswap_reject_compress_fail;
> >  /* Compressed page was too big for the allocator to (optimally) store */
> >  static u64 zswap_reject_compress_poor;
> > +/* Load and writeback failed due to decompression failure */
>
> Nit: Load or writeback?

Consider it done :)

>
> > +static u64 zswap_reject_decompress_fail;
> >  /* Store failed because underlying allocator could not get memory */
> >  static u64 zswap_reject_alloc_fail;
> >  /* Store failed because the entry metadata could not be allocated (rare) */
> > @@ -953,11 +955,12 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
> >       return comp_ret == 0 && alloc_ret == 0;
> >  }
> >
> > -static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
> > +static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
> >  {
> >       struct zpool *zpool = entry->pool->zpool;
> >       struct scatterlist input, output;
> >       struct crypto_acomp_ctx *acomp_ctx;
> > +     bool ret = true;
> >       u8 *src;
> >
> >       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > @@ -984,12 +987,19 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
> >       sg_init_table(&output, 1);
> >       sg_set_folio(&output, folio, PAGE_SIZE, 0);
> >       acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
> > -     BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
> > -     BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
> > +     if (crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait) ||
> > +                     acomp_ctx->req->dlen != PAGE_SIZE) {
> > +             ret = false;
> > +             zswap_reject_decompress_fail++;
> > +             pr_alert_ratelimited(
> > +                     "decompression failed on zswap entry with offset %08lx\n",
> > +                     entry->swpentry.val);
> > +     }
>
> This can probably be prettier if we save the return value of
> crypto_wait_req() in a variable, and then do the check at the end of the
> function:
>
> zswap_decompress()
> {
>         mutex_lock();
>         ...
>         ret = crypto_wait_req(..);
>         ...
>         mutex_unlock();
>         ...
>         if (ret || acomp_ctx->req->dlen != PAGE_SIZE) {

Hmmmm, do we need the mutex to protect acomp_ctx->req->dlen? No strong
opinions here from my end TBH.

>                 /* SHIT */
>                 return false;
>         }
>         return true;
> }
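Spelled out, the suggested restructure might look like the following (a userspace sketch with a pthread mutex standing in for the acomp_ctx mutex; all names are illustrative): save the `crypto_wait_req()` result, drop the lock, and do a single failure check at the end. Whether `dlen` may safely be read outside the lock is exactly the open question above; here it is a local, so the model sidesteps it.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

struct ctx { size_t dlen; };

/* Stand-in for crypto_wait_req(crypto_acomp_decompress(...), ...). */
static int do_decompress(struct ctx *ctx, bool fail)
{
	ctx->dlen = fail ? 0 : PAGE_SIZE;
	return fail ? -EINVAL : 0;
}

static bool zswap_decompress_model(bool fail)
{
	struct ctx ctx;
	int ret;

	pthread_mutex_lock(&ctx_lock);
	ret = do_decompress(&ctx, fail);
	pthread_mutex_unlock(&ctx_lock);

	/* Single failure check at the end, as suggested above. */
	if (ret || ctx.dlen != PAGE_SIZE)
		return false;
	return true;
}
```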
[...]
>
> We are doing load + erase here and in the writeback path now, so two
> xarray walks instead of one. How does this affect performance? We had a
> similar discussion about the possibility of doing a lockless xas_load()
> followed by a conditional xas_erase() for zswap_invalidate():
>
> https://lore.kernel.org/lkml/20241018192525.95862-1-ryncsn@gmail.com/
>
> Unfortunately it seems like we can't trivially do that unless we keep
> the tree locked, which we probably don't want to do throughout
> decompression.
>
> How crazy would it be to remove the entry from the tree and re-add it if
> decompression fails? Does swapcache_prepare() provide sufficient
> protection for us to do this without anyone else looking at this entry
> (seems like it)?

My concern is, what happens if xa_store() in the re-add path fails
because we do not have enough memory?
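The hazard can be sketched abstractly (a userspace model; `writeback_model` and the flags are stand-ins for xa_erase()/xa_store(), not real kernel calls): with a remove-then-readd scheme, if the re-insert fails under memory pressure, the entry has already left the tree and is lost even though writeback did not complete.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Models a remove-then-readd writeback scheme:
 *  - the entry is erased from the tree up front (xa_erase());
 *  - on decompression failure, we try to put it back (xa_store()),
 *    which can itself fail with -ENOMEM.
 * Returns false when the entry is lost.
 */
static bool writeback_model(bool decompress_fails, bool store_fails,
			    bool *in_tree)
{
	*in_tree = false;		/* erased before writeback */
	if (decompress_fails) {
		if (store_fails)
			return false;	/* re-add failed: entry lost */
		*in_tree = true;	/* re-add succeeded */
	}
	return true;
}
```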

>
> Anyway, this is all moot if the second walk is not noticeable from a
> perf perspective.
