Message-ID: <202308311538.2BD3826FD@keescook>
Date: Thu, 31 Aug 2023 15:39:08 -0700
From: Kees Cook <keescook@...omium.org>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Eric Biggers <ebiggers@...nel.org>,
Herbert Xu <herbert@...dor.apana.org.au>,
Tony Luck <tony.luck@...el.com>,
"Guilherme G. Piccoli" <gpiccoli@...lia.com>,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2] pstore: Base compression input buffer size on
estimated compressed size
On Thu, Aug 31, 2023 at 11:34:17PM +0200, Ard Biesheuvel wrote:
> On Thu, 31 Aug 2023 at 23:01, Kees Cook <keescook@...omium.org> wrote:
> >
> > From: Ard Biesheuvel <ardb@...nel.org>
> >
> > Commit 1756ddea6916 ("pstore: Remove worst-case compression size logic")
> > removed some clunky per-algorithm worst-case size estimation routines on
> > the basis that we can always store pstore records uncompressed, and that
> > those worst-case estimates only describe how much the size might
> > inadvertently *increase* due to encapsulation overhead when the input
> > cannot be compressed at all. So if compression results in a size
> > increase, we just store the original data instead.
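
For illustration, the "store raw on expansion" policy described above
amounts to something like the following (a self-contained userspace
sketch using zlib's compress2(); the function and variable names are
illustrative, not the actual pstore interfaces):

  #include <string.h>
  #include <zlib.h>

  /*
   * Fill out[] (record_size bytes, with in_len <= record_size) from
   * in[]: keep the compressed form only when it is strictly smaller
   * than the input, otherwise fall back to storing the raw data.
   * Returns the number of bytes actually placed in out[].
   */
  static size_t store_record(const unsigned char *in, size_t in_len,
                             unsigned char *out, size_t record_size)
  {
          uLongf zipped = record_size;

          if (compress2(out, &zipped, in, in_len,
                        Z_BEST_COMPRESSION) == Z_OK && zipped < in_len)
                  return zipped;          /* store compressed */

          memcpy(out, in, in_len);        /* store uncompressed */
          return in_len;
  }
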
> >
> > However, it seems that the original code was misinterpreting these
> > calculations as an estimation of how much uncompressed data might fit
> > into a compressed buffer of a given size, and it was using the results
> > to consume the input data in larger chunks than the pstore record size,
> > relying on the compression to ensure that what ultimately gets stored
> > fits into the available space.
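
Concretely, using the zlib table quoted below: for a 1000-byte efivars
record the old code assumed a 56% compression ratio and therefore
pulled 1000 * 100 / 56 ~= 1785 bytes of dmesg text per record, trusting
compression to squeeze that back under 1000 bytes.
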
> >
> > One result of this, as observed and reported by Linus, is that
> > upgrading to a newer kernel that includes the given commit may result
> > in pstore decompression errors reported in the kernel log, because
> > existing records may unexpectedly decompress to a size larger than
> > the pstore record size.
> >
> > Another potential problem caused by this change is that we may
> > underutilize the fixed-size records on pstore backends such as ramoops.
> > And on pstore backends with variable-size records such as EFI, we will
> > end up creating many more entries than before to store the same amount
> > of compressed data.
> >
> > So let's fix both issues by bringing back the typical-case estimation
> > of how much ASCII text captured from the dmesg log might fit into a
> > pstore record of a given size after compression. The original
> > implementation used the computation given below for zlib:
> >
> >   switch (size) {
> >   /* buffer range for efivars */
> >   case 1000 ... 2000:
> >           cmpr = 56;
> >           break;
> >   case 2001 ... 3000:
> >           cmpr = 54;
> >           break;
> >   case 3001 ... 3999:
> >           cmpr = 52;
> >           break;
> >   /* buffer range for nvram, erst */
> >   case 4000 ... 10000:
> >           cmpr = 45;
> >           break;
> >   default:
> >           cmpr = 60;
> >           break;
> >   }
> >
> >   return (size * 100) / cmpr;
> >
> > We will use the previous worst-case ratio of 60% for compression. For
> > decompression, go extra large (3x) so we make sure there's enough
> > space for anything.
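
A rough sketch of that sizing (the identifiers here are illustrative,
not necessarily the ones used in the patch):

  /* Worst-case compression ratio assumed for captured dmesg text. */
  #define DMESG_COMP_PERCENT      60

  /*
   * Bytes of ASCII input to feed the compressor so that the output
   * is expected to fit a record of record_size bytes (~1.67x).
   */
  static size_t compress_input_size(size_t record_size)
  {
          return record_size * 100 / DMESG_COMP_PERCENT;
  }

  /*
   * Output buffer for decompression: 3x the record size leaves room
   * for records written by older kernels that used larger input chunks.
   */
  static size_t decompress_output_size(size_t record_size)
  {
          return 3 * record_size;
  }
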
> >
> > While at it, rate limit the error message so we don't flood the log
> > unnecessarily on systems that have accumulated a lot of pstore history.
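
The rate limiting mentioned here can be done with the kernel's standard
ratelimited printk helpers, along these lines (the exact message text
in the patch may differ):

  /* Avoid flooding the log when many old records fail to decompress. */
  pr_err_ratelimited("pstore: decompression failed: %d\n", ret);
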
> >
> > Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> > Cc: Eric Biggers <ebiggers@...nel.org>
> > Cc: Kees Cook <keescook@...omium.org>
> > Cc: Herbert Xu <herbert@...dor.apana.org.au>
> > Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
> > Link: https://lore.kernel.org/r/20230830212238.135900-1-ardb@kernel.org
> > Co-developed-by: Kees Cook <keescook@...omium.org>
> > Signed-off-by: Kees Cook <keescook@...omium.org>
> > ---
> > v2:
> > - reduce compression buffer size to 1.67x from 2x
> > - raise decompression buffer size to 3x
>
> LGTM
>
> Thanks for picking this up.
You bet! :) I've pushed it out, and if the bots don't yell at me I'll
send a PR to Linus tomorrow.
--
Kees Cook