Message-ID: <20250227031607.GY5777@suse.cz>
Date: Thu, 27 Feb 2025 04:16:07 +0100
From: David Sterba <dsterba@...e.cz>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Nitin Gupta <nitingupta910@...il.com>,
Richard Purdie <rpurdie@...nedhand.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
"Markus F.X.J. Oberhumer" <markus@...rhumer.com>,
Dave Rodgman <dave.rodgman@....com>
Subject: Re: [PATCH] lib/lzo: Avoid output overruns when compressing
On Thu, Feb 27, 2025 at 09:46:10AM +0800, Herbert Xu wrote:
> On Wed, Feb 26, 2025 at 02:00:37PM +0100, David Sterba wrote:
> >
> > Does it have to check for the overruns? The worst case compression
> > result size is known and can be calculated by the formula. Using big
>
> If the caller is using different algorithms, then yes the checks
> are essential. Otherwise the caller would have to allocate enough
> memory not just for LZO, but for the worst-case compression length
> for *any* algorithm. Adding a single algorithm would have the
> potential of breaking all users.
>
> > What strikes me as alarming is that you insert about 20 branches into a
> > realtime compression algorithm, where everything is basically a hot
> > path. Branches that almost never happen, and never if the output buffer
> > is big enough.
>
> OK, if that is a real concern then I will add a _safe version of
> LZO compression alongside the existing code.
Makes sense, thanks. The in-kernel users are fine, but the crypto API also
exports the compression, so there's no guarantee it's used correctly. As
the checks need changes to the LZO code itself, I don't see a better way
than to have two versions, conveniently done by the macros as you did.