Message-Id: <1238759294.9692.49.camel@nigel-laptop>
Date: Fri, 03 Apr 2009 22:48:14 +1100
From: Nigel Cunningham <ncunningham@...a.org.au>
To: Andreas Robinson <andr345@...il.com>
Cc: Arjan van de Ven <arjan@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Alain Knaff <alain@...ff.lu>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] lib: add fast lzo decompressor
Hi.
On Fri, 2009-04-03 at 12:54 +0200, Andreas Robinson wrote:
> The LZO compressor can produce more bytes than it consumes but here the
> output buffer is the same size as the input.
> This macro in linux/lzo.h defines how big the buffer needs to be:
> #define lzo1x_worst_compress(x) ((x) + ((x) / 16) + 64 + 3)
Okay. Am I right in thinking (from staring at the code) that the
compression algo just assumes it has an output buffer big enough? (I
don't see it checking out_len, only writing to it). If that's the case,
I guess I need to (ideally) persuade the cryptoapi guys to extend the
api so you can find out how big an output buffer is needed for a
particular compression algorithm - or learn how they've already done
that (though it doesn't look like it to me).
> If there are multiple threads perhaps they clobber each other's output
> buffers?
Nope. The output buffers you see here are fed to the next part of the
pipeline (the block I/O code), which combines them (under a mutex) into
a stream of |index|size|data|index|size|data... so that we don't have to
worry at all about which processor compressed the data (or which will
decompress it later). As I said earlier, it's worked fine with LZF - or no compression
- for years. It's just LZO that causes me problems.
Thanks!
Nigel
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/