Message-ID: <CAF8kJuPF5ACu8o1P7GqEQRb6p8QShyTVNuzrrY557g+SsddzWA@mail.gmail.com>
Date: Tue, 26 Dec 2023 15:29:54 -0800
From: Chris Li <chrisl@...nel.org>
To: Nhat Pham <nphamcs@...il.com>
Cc: syzbot <syzbot+3eff5e51bf1db122a16e@...kaller.appspotmail.com>, 
	akpm@...ux-foundation.org, davem@...emloft.net, herbert@...dor.apana.org.au, 
	linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org, 
	syzkaller-bugs@...glegroups.com, yosryahmed@...gle.com, 
	zhouchengming@...edance.com
Subject: Re: [syzbot] [crypto?] general protection fault in
 scatterwalk_copychunks (5)

Hi Nhat,

On Tue, Dec 26, 2023 at 3:11 PM Nhat Pham <nphamcs@...il.com> wrote:

> > The decompressed output can be bigger than the input. However, here we
> > specify the output size limit to be one page. The decompressor should
> > not output more than the one page of the dst buffer.
> >
> > I did check the lzo1x_decompress_safe() function.
>
> I think you meant lzo1x_1_do_compress(), right? This error happens on
> the zswap_store() path, so it is a compression bug.

Ah, my bad. I don't know why I was looking at the decompression side
rather than the compression side.
Thanks for catching that.

>
> > It is supposed to use the NEED_OP(x) check before it stores to the output buffer.
>
> I can't seem to find any such check in the compression code. But that
> function is 300 LoC, with no comments :)

It seems the compression side does not have a safe version of the
function that respects the output buffer size. I agree with you that
there seems to be no check on the output buffer size before writing to
the output buffer. The "size_t *out_len" argument seems to work as an
output-only pointer; it does not act as an output size limit.

The only place out_len is used is at lzo1x_compress.c:298:
	*out_len = op - out;

So I can confirm it is output only :-(
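
For reference, here is how I read the prototypes in include/linux/lzo.h
(paraphrasing from memory, so please double check the tree): the
compressor only writes *dst_len back, while the decompressor at least
has a _safe variant that treats *dst_len as an input bound:

/* compressor: *dst_len is output only, no capacity is enforced */
int lzo1x_1_compress(const unsigned char *src, size_t src_len,
		     unsigned char *dst, size_t *dst_len,
		     void *wrkmem);

/* decompressor: the _safe variant reads *dst_len as the dst capacity
 * and returns LZO_E_OUTPUT_OVERRUN instead of writing past it */
int lzo1x_decompress_safe(const unsigned char *src, size_t src_len,
			  unsigned char *dst, size_t *dst_len);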

>
> > However, I do find one place that seems to be missing that check, or at
> > least it is not obvious to me how that check is effective. I will
> > paste it here and let other experts take a look as well:
> > Line 228:
> >
> > if (op - m_pos >= 8) {
> > 	unsigned char *oe = op + t;
> > 	if (likely(HAVE_OP(t + 15))) {
> > 		do {
> > 			COPY8(op, m_pos);
> > 			op += 8;
> > 			m_pos += 8;
> > 			COPY8(op, m_pos);
> > 			op += 8;
> > 			m_pos += 8;
> > 		} while (op < oe);
> > 		op = oe;
> > 		if (HAVE_IP(6)) {
> > 			state = next;
> > 			COPY4(op, ip); <========= This COPY4 does not have an
> > obvious NEED_OP() check. As far as I can tell, this output is not
> > covered by the HAVE_OP(t + 15) check above, which is meant to protect
> > the two unaligned 8-byte stores inside the "do while" loop.
> > 			op += next;
> > 			ip += next;
> > 			continue;
> > 		}
> > 	} else {
> > 		NEED_OP(t);
> > 		do {
> > 			*op++ = *m_pos++;
> > 		} while (op < oe);
> > 	}
> > } else
> >
> >
> > >
> > > Not 100% sure about the Linux kernel's implementation though. I'm no
> > > compression expert :)
> >
> > Me neither. Anyway, if it is a decompression issue, then for this bug it
> > would be good to have some debug print or assert to check that, after
> > decompression, *dlen is not bigger than the original output
> > length. If it does blow past the decompression buffer, it is a bug
> > that needs to be fixed on the decompression function side, not in the
> > zswap code.
>
> But regardless, I agree. We should enforce the condition that the
> output must not exceed the given buffer's size, and gracefully fail
> if it does (i.e. return an interpretable error code as opposed to just
> walking off the cliff like this).

Again, sorry, I was looking at the decompression side rather than the
compression side. The compression side does not even offer a safe
version of the compression function.
That seems dangerous. For now, it seems we should have zswap roll back
to the 2-page buffer until we have a safe way to do compression
without overrunning the output buffer.
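
If I remember include/linux/lzo.h correctly, lzo1x_worst_compress(x) is
((x) + ((x) / 16) + 64 + 3), i.e. about 4419 bytes for a 4K page, so a
2-page dst buffer does cover LZO's worst case. Something like the
(hypothetical) check below could document that assumption on the zswap
side; please treat it as a sketch only:

#include <linux/build_bug.h>
#include <linux/lzo.h>
#include <linux/mm.h>

/* hypothetical helper, just to spell out the invariant */
static inline void zswap_check_dstmem_bound(void)
{
	/* 4096 + 4096/16 + 67 = 4419 < 8192 for a 4K PAGE_SIZE, so a
	 * 2-page per-CPU dst buffer covers LZO's worst case even
	 * though the compressor itself enforces no bound */
	BUILD_BUG_ON(lzo1x_worst_compress(PAGE_SIZE) > 2 * PAGE_SIZE);
}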

>
> I imagine that many compression use cases are best-effort
> optimizations, i.e. it's fine to fail. zswap is exactly this - do your
> best to compress the data, but if the data is too incompressible, failure is
> acceptable (we have plenty of fallback options - swapping, keeping the
> page in memory, etc.).
>
> Furthermore, this is a bit of a leaky abstraction - currently there's
> no contract on how much bigger the "compressed" output can be with
> respect to the input. It's compression-algorithm-dependent, which
> defeats the point of the abstraction. In zswap's case, 2 * PAGE_SIZE is
> just a rule of thumb - now imagine we use a compression algorithm that
> behaves super well in most cases, but would output 3 *
> PAGE_SIZE in some edge cases. That would still break the code. Better
> to give the caller the ability to bound the output size, and
> gracefully fail if that bound cannot be respected for a particular
> input.

Agree.
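
To sketch the kind of contract you are describing (everything below is
hypothetical, not an existing kernel API; it just mirrors what
lzo1x_decompress_safe() already does on the decompression side),
*dst_len would be the dst capacity on entry and the produced length on
success:

/* hypothetical bounded-output compressor */
int lzo1x_1_compress_safe(const unsigned char *src, size_t src_len,
			  unsigned char *dst, size_t *dst_len,
			  void *wrkmem);

/* a caller like zswap could then bound the output and fail cleanly */
static int example_compress_page(void *src, void *dst, void *wrkmem,
				 size_t *dlen)
{
	*dlen = PAGE_SIZE;	/* hard cap: never write past one page */
	return lzo1x_1_compress_safe(src, PAGE_SIZE, dst, dlen, wrkmem);
	/* an LZO_E_OUTPUT_OVERRUN return would mean "incompressible,
	 * fall back to regular swap / keeping the page in memory" */
}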

Chris
