Message-ID: <CAMj1kXE2kz1raOSy+ethim7YyFvKs+_uP2xhvndXDAbLdJDLdA@mail.gmail.com>
Date:   Thu, 31 Aug 2023 23:34:17 +0200
From:   Ard Biesheuvel <ardb@...nel.org>
To:     Kees Cook <keescook@...omium.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Eric Biggers <ebiggers@...nel.org>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Tony Luck <tony.luck@...el.com>,
        "Guilherme G. Piccoli" <gpiccoli@...lia.com>,
        linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2] pstore: Base compression input buffer size on
 estimated compressed size

On Thu, 31 Aug 2023 at 23:01, Kees Cook <keescook@...omium.org> wrote:
>
> From: Ard Biesheuvel <ardb@...nel.org>
>
> Commit 1756ddea6916 ("pstore: Remove worst-case compression size logic")
> removed some clunky per-algorithm worst-case size estimation routines on
> the basis that we can always store pstore records uncompressed: those
> worst-case estimations only describe how much the size might
> inadvertently *increase* due to encapsulation overhead when the input
> cannot be compressed at all, and if compression results in a size
> increase, we just store the original data instead.
>
> However, it seems that the original code was misinterpreting these
> calculations as an estimate of how much uncompressed data might fit
> into a compressed buffer of a given size, and it was using the results
> to consume the input data in larger chunks than the pstore record size,
> relying on the compression to ensure that what ultimately gets stored
> fits into the available space.
>
> One result of this, as observed and reported by Linus, is that upgrading
> to a newer kernel that includes the given commit may result in pstore
> decompression errors being reported in the kernel log, because the
> existing records may unexpectedly decompress to a size that is larger
> than the pstore record size.
>
> Another potential problem caused by this change is that we may
> underutilize the fixed-size records on pstore backends such as ramoops,
> and on pstore backends with variable-size records such as EFI, we will
> end up creating many more entries than before to store the same amount
> of compressed data.
>
> So let's fix both issues by bringing back the typical-case estimation of
> how much ASCII text captured from the dmesg log might fit into a pstore
> record of a given size after compression. The original implementation
> used the computation given below for zlib:
>
>   switch (size) {
>   /* buffer range for efivars */
>   case 1000 ... 2000:
>         cmpr = 56;
>         break;
>   case 2001 ... 3000:
>         cmpr = 54;
>         break;
>   case 3001 ... 3999:
>         cmpr = 52;
>         break;
>   /* buffer range for nvram, erst */
>   case 4000 ... 10000:
>         cmpr = 45;
>         break;
>   default:
>         cmpr = 60;
>         break;
>   }
>
>   return (size * 100) / cmpr;
>
> We will use the previous worst case of 60% for compression. For
> decompression, go extra large (3x) so we can be sure there is enough
> space for anything.
>
> While at it, rate limit the error message so we don't flood the log
> unnecessarily on systems that have accumulated a lot of pstore history.
>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Eric Biggers <ebiggers@...nel.org>
> Cc: Kees Cook <keescook@...omium.org>
> Cc: Herbert Xu <herbert@...dor.apana.org.au>
> Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
> Link: https://lore.kernel.org/r/20230830212238.135900-1-ardb@kernel.org
> Co-developed-by: Kees Cook <keescook@...omium.org>
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
> v2:
>  - reduce compression buffer size to 1.67x from 2x
>  - raise decompression buffer size to 3x

LGTM

Thanks for picking this up.
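
For anyone following along, the new sizing math is easy to sanity-check.
A minimal standalone sketch (plain userspace C, not the kernel code
path; the 1024-byte record size is a made-up example, not a kernel
value):

  #include <stdio.h>
  #include <stddef.h>

  #define DMESG_COMP_PERCENT 60

  int main(void)
  {
          size_t bufsize = 1024;  /* hypothetical psinfo->bufsize */
          /* deflate input buffer: ~1.67x the record size */
          size_t compressed_size = (bufsize * 100) / DMESG_COMP_PERCENT;
          /* inflate workspace: 3x the record size */
          size_t max_uncompressed_size = 3 * bufsize;

          printf("compression input:    %zu bytes\n", compressed_size);       /* 1706 */
          printf("decompression output: %zu bytes\n", max_uncompressed_size); /* 3072 */
          return 0;
  }

So a 1024-byte record can be fed up to ~1706 bytes of dmesg text on the
compression side, while any record written by an older kernel that
packed in even more input still has 3072 bytes to inflate into.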

> v1: https://lore.kernel.org/all/20230830212238.135900-1-ardb@kernel.org
> ---
>  fs/pstore/platform.c | 34 +++++++++++++++++++++++++++-------
>  1 file changed, 27 insertions(+), 7 deletions(-)
>
> diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
> index 62356d542ef6..e5bca9a004cc 100644
> --- a/fs/pstore/platform.c
> +++ b/fs/pstore/platform.c
> @@ -98,7 +98,14 @@ MODULE_PARM_DESC(kmsg_bytes, "amount of kernel log to snapshot (in bytes)");
>
>  static void *compress_workspace;
>
> +/*
> + * Compression is only used for dmesg output, which consists of low-entropy
> + * ASCII text, and so we can assume worst-case 60%.
> + */
> +#define DMESG_COMP_PERCENT     60
> +
>  static char *big_oops_buf;
> +static size_t max_compressed_size;
>
>  void pstore_set_kmsg_bytes(int bytes)
>  {
> @@ -196,6 +203,7 @@ static int pstore_compress(const void *in, void *out,
>
>  static void allocate_buf_for_compression(void)
>  {
> +       size_t compressed_size;
>         char *buf;
>
>         /* Skip if not built-in or compression disabled. */
> @@ -216,7 +224,8 @@ static void allocate_buf_for_compression(void)
>          * uncompressed record size, since any record that would be expanded by
>          * compression is just stored uncompressed.
>          */
> -       buf = kvzalloc(psinfo->bufsize, GFP_KERNEL);
> +       compressed_size = (psinfo->bufsize * 100) / DMESG_COMP_PERCENT;
> +       buf = kvzalloc(compressed_size, GFP_KERNEL);
>         if (!buf) {
>                 pr_err("Failed %zu byte compression buffer allocation for: %s\n",
>                        psinfo->bufsize, compress);
> @@ -233,6 +242,7 @@ static void allocate_buf_for_compression(void)
>
>         /* A non-NULL big_oops_buf indicates compression is available. */
>         big_oops_buf = buf;
> +       max_compressed_size = compressed_size;
>
>         pr_info("Using crash dump compression: %s\n", compress);
>  }
> @@ -246,6 +256,7 @@ static void free_buf_for_compression(void)
>
>         kvfree(big_oops_buf);
>         big_oops_buf = NULL;
> +       max_compressed_size = 0;
>  }
>
>  void pstore_record_init(struct pstore_record *record,
> @@ -305,7 +316,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
>                 record.buf = psinfo->buf;
>
>                 dst = big_oops_buf ?: psinfo->buf;
> -               dst_size = psinfo->bufsize;
> +               dst_size = max_compressed_size ?: psinfo->bufsize;
>
>                 /* Write dump header. */
>                 header_size = snprintf(dst, dst_size, "%s#%d Part%u\n", why,
> @@ -326,8 +337,15 @@ static void pstore_dump(struct kmsg_dumper *dumper,
>                                 record.compressed = true;
>                                 record.size = zipped_len;
>                         } else {
> -                               record.size = header_size + dump_size;
> -                               memcpy(psinfo->buf, dst, record.size);
> +                               /*
> +                                * Compression failed, so the buffer is most
> +                                * likely filled with binary data that does not
> +                                * compress as well as ASCII text. Copy as much
> +                                * of the uncompressed data as possible into
> +                                * the pstore record, and discard the rest.
> +                                */
> +                               record.size = psinfo->bufsize;
> +                               memcpy(psinfo->buf, dst, psinfo->bufsize);
>                         }
>                 } else {
>                         record.size = header_size + dump_size;
> @@ -560,6 +578,7 @@ static void decompress_record(struct pstore_record *record,
>         int ret;
>         int unzipped_len;
>         char *unzipped, *workspace;
> +       size_t max_uncompressed_size;
>
>         if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS) || !record->compressed)
>                 return;
> @@ -583,7 +602,8 @@ static void decompress_record(struct pstore_record *record,
>         }
>
>         /* Allocate enough space to hold max decompression and ECC. */
> -       workspace = kvzalloc(psinfo->bufsize + record->ecc_notice_size,
> +       max_uncompressed_size = 3 * psinfo->bufsize;
> +       workspace = kvzalloc(max_uncompressed_size + record->ecc_notice_size,
>                              GFP_KERNEL);
>         if (!workspace)
>                 return;
> @@ -591,11 +611,11 @@ static void decompress_record(struct pstore_record *record,
>         zstream->next_in        = record->buf;
>         zstream->avail_in       = record->size;
>         zstream->next_out       = workspace;
> -       zstream->avail_out      = psinfo->bufsize;
> +       zstream->avail_out      = max_uncompressed_size;
>
>         ret = zlib_inflate(zstream, Z_FINISH);
>         if (ret != Z_STREAM_END) {
> -               pr_err("zlib_inflate() failed, ret = %d!\n", ret);
> +               pr_err_ratelimited("zlib_inflate() failed, ret = %d!\n", ret);
>                 kvfree(workspace);
>                 return;
>         }
> --
> 2.34.1
>

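As an aside, the failure mode Linus reported can be reproduced in
userspace with zlib: inflating a record whose uncompressed payload
exceeds the output buffer fails with Z_BUF_ERROR, which is what
decompress_record() hit with avail_out capped at bufsize. A minimal
standalone sketch (build with -lz; the 1024-byte record and 3000-byte
payload are made-up numbers, not kernel values):

  #include <stdio.h>
  #include <zlib.h>

  int main(void)
  {
          static unsigned char text[3000];   /* stand-in for captured dmesg text */
          unsigned char record[1024];        /* stand-in for one pstore record */
          unsigned char small[1024], big[3 * 1024];
          uLongf clen = sizeof(record), dlen;
          size_t i;

          /* Low-entropy ASCII, like dmesg output; compresses far below 1024. */
          for (i = 0; i < sizeof(text); i++)
                  text[i] = "pstore test line\n"[i % 17];

          /* Older kernels could pack this much input into a single record. */
          if (compress(record, &clen, text, sizeof(text)) != Z_OK)
                  return 1;

          dlen = sizeof(small);  /* avail_out == bufsize, as before this patch */
          printf("1x buffer: %d\n", uncompress(small, &dlen, record, clen)); /* Z_BUF_ERROR (-5) */

          dlen = sizeof(big);    /* avail_out == 3 * bufsize, as after this patch */
          printf("3x buffer: %d\n", uncompress(big, &dlen, record, clen));   /* Z_OK (0) */
          return 0;
  }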