Message-ID: <20250226130037.GS5777@twin.jikos.cz>
Date: Wed, 26 Feb 2025 14:00:37 +0100
From: David Sterba <dsterba@...e.cz>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Nitin Gupta <nitingupta910@...il.com>,
Richard Purdie <rpurdie@...nedhand.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
"Markus F.X.J. Oberhumer" <markus@...rhumer.com>,
Dave Rodgman <dave.rodgman@....com>
Subject: Re: [PATCH] lib/lzo: Avoid output overruns when compressing
On Sun, Feb 23, 2025 at 02:55:24PM +0800, Herbert Xu wrote:
> The compression code in LZO never checked for output overruns.
> Fix this by checking for end of buffer before each write.
Does it have to check for overruns? The worst-case compressed size is
known and can be calculated from a formula. Providing a big enough
buffer is part of the correct usage of LZO. All in-kernel users of
lzo1x_1_compress() seem to size the target buffer with
lzo1x_worst_compress(): F2FS, JFFS2, BTRFS. Not sure about ZRAM.
What strikes me as alarming is that you insert about 20 branches into a
real-time compression algorithm, where everything is basically a hot
path. These are branches that almost never happen, and never when the
output buffer is big enough.
Please drop the patch.