Message-ID: <Z7_D4i5yifwdXjwZ@gondor.apana.org.au>
Date: Thu, 27 Feb 2025 09:46:10 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: David Sterba <dsterba@...e.cz>
Cc: Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nitin Gupta <nitingupta910@...il.com>,
	Richard Purdie <rpurdie@...nedhand.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Sergey Senozhatsky <senozhatsky@...omium.org>,
	"Markus F.X.J. Oberhumer" <markus@...rhumer.com>,
	Dave Rodgman <dave.rodgman@....com>
Subject: Re: [PATCH] lib/lzo: Avoid output overruns when compressing

On Wed, Feb 26, 2025 at 02:00:37PM +0100, David Sterba wrote:
>
> Does it have to check for the overruns? The worst case compression
> result size is known and can be calculated by the formula. Using big

If the caller is using different algorithms, then yes, the checks
are essential. Otherwise the caller would have to allocate enough
memory not just for LZO, but for the worst-case compressed length of
*any* algorithm. Adding a single new algorithm could then break
every existing user.

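For reference, the per-algorithm worst case David refers to is the bound
that include/linux/lzo.h already exports as lzo1x_worst_compress(), which
expands to len + len/16 + 64 + 3. A minimal sketch of a caller that sizes
its destination from that bound (assuming a vmalloc'ed buffer) would be:

	#include <linux/lzo.h>
	#include <linux/vmalloc.h>

	/*
	 * Sketch only: size the destination to LZO's documented worst
	 * case so the compressor cannot overrun it.
	 */
	static void *alloc_lzo_dst(size_t src_len, size_t *dst_len)
	{
		*dst_len = lzo1x_worst_compress(src_len);
		return vmalloc(*dst_len);
	}

That only helps when the caller knows it is talking to LZO specifically;
an algorithm-agnostic caller would need the maximum of every backend's
worst case, which is the problem described above.
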
> What strikes me as alarming is that you insert about 20 branches into a
> realtime compression algorithm, where everything is basically a hot
> path. Branches that almost never happen, and never if the output buffer
> is big enough.

OK, if that is a real concern then I will add a _safe version of
LZO compression alongside the existing code.
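
Roughly, such a variant would keep the existing prototype
(lzo1x_1_compress() takes src, src_len, dst, &dst_len, wrkmem) but also
read *dst_len on entry as the capacity of dst and fail instead of
overrunning it. A hypothetical caller (the name lzo1x_1_compress_safe
below is only an assumption here, not an existing symbol) might look like:

	#include <linux/errno.h>
	#include <linux/lzo.h>

	/*
	 * Hypothetical caller of a bounds-checked variant.  The name
	 * lzo1x_1_compress_safe() is assumed: same prototype as
	 * lzo1x_1_compress(), but *dst_len is also read on entry as the
	 * capacity of dst, and an error is returned rather than writing
	 * past the end of the buffer.
	 */
	static int compress_bounded(const unsigned char *src, size_t src_len,
				    unsigned char *dst, size_t dst_capacity,
				    void *wrkmem)
	{
		size_t dst_len = dst_capacity;	/* in: capacity, out: bytes used */
		int ret;

		ret = lzo1x_1_compress_safe(src, src_len, dst, &dst_len, wrkmem);
		return ret == LZO_E_OK ? 0 : -ENOSPC;
	}
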
Cheers,
--
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt