Message-ID: <f27efd2e-ac65-4f6a-b1b5-c9fb0753d871@bytedance.com>
Date: Wed, 27 Dec 2023 11:51:32 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Nhat Pham <nphamcs@...il.com>, Chris Li <chrisl@...nel.org>,
21cnbao@...il.com
Cc: syzbot <syzbot+3eff5e51bf1db122a16e@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, davem@...emloft.net, herbert@...dor.apana.org.au,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
syzkaller-bugs@...glegroups.com, yosryahmed@...gle.com
Subject: Re: [syzbot] [crypto?] general protection fault in
scatterwalk_copychunks (5)
On 2023/12/27 08:23, Nhat Pham wrote:
> On Tue, Dec 26, 2023 at 3:30 PM Chris Li <chrisl@...nel.org> wrote:
>>
>> Again, sorry I was looking at the decompression side rather than the
>> compression side. The compression side does not even offer a safe
>> version of the compression function.
>> That seems to be dangerous. It seems for now we should make the zswap
>> roll back to 2 page buffer until we have a safe way to do compression
>> without overwriting the output buffers.
>
> Unfortunately, I think this is the way - at least until we rework the
> crypto/compression API (if that's even possible?).
> I still think the 2 page buffer is dumb, but it is what it is :(
Hi,

I think it's a bug in `scomp_acomp_comp_decomp()`, which doesn't use the
caller-passed "src" and "dst" scatterlists directly. Instead, it copies
through its own per-CPU "scomp_scratch" buffers, which have 128KB of src
and dst space.

When compression is done, it uses the output req->dlen to copy
scomp_scratch->dst back to our dstmem, which now holds only one page, so
this overflow happens.

I still don't understand why alg->compress(src, slen, dst, &dlen) doesn't
check dlen against the destination size. It seems like an obvious bug, right?
As for this problem in `scomp_acomp_comp_decomp()`, the patch below
should fix it. I will set up a few tests to verify it later.

Thanks!
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 442a82c9de7d..e654a120ae5a 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -117,6 +117,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	struct crypto_scomp *scomp = *tfm_ctx;
 	void **ctx = acomp_request_ctx(req);
 	struct scomp_scratch *scratch;
+	unsigned int dlen;
 	int ret;
 
 	if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
@@ -128,6 +129,8 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
 		req->dlen = SCOMP_SCRATCH_SIZE;
 
+	dlen = req->dlen;
+
 	scratch = raw_cpu_ptr(&scomp_scratch);
 	spin_lock(&scratch->lock);
 
@@ -145,6 +148,9 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 			ret = -ENOMEM;
 			goto out;
 		}
+	} else if (req->dlen > dlen) {
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
 				 1);