Message-Id: <20190520115251.364236232@linuxfoundation.org>
Date: Mon, 20 May 2019 14:13:33 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ondrej Mosnacek <omosnace@...hat.com>,
Eric Biggers <ebiggers@...gle.com>,
Herbert Xu <herbert@...dor.apana.org.au>
Subject: [PATCH 5.1 026/128] crypto: lrw - don't access already-freed walk.iv

From: Eric Biggers <ebiggers@...gle.com>

commit aec286cd36eacfd797e3d5dab8d5d23c15d1bb5e upstream.

If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of
skcipher_walk_virt().

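For callers, the safe pattern this fix establishes looks roughly like
the sketch below. This is a minimal, hypothetical caller (the function
name example_crypt and the trivial walk loop are illustrative, not the
actual crypto/lrw.c code); the point is simply that walk.iv must not be
dereferenced once skcipher_walk_virt() has reported failure:

#include <crypto/internal/skcipher.h>

static int example_crypt(struct skcipher_request *req)
{
	struct skcipher_walk w;
	__be32 *iv;
	int err;

	err = skcipher_walk_virt(&w, req, false);
	if (err)
		return err;	/* w.iv may already be freed here */

	iv = (__be32 *)w.iv;	/* safe only once err == 0 */
	/* ... consume the IV, then walk the data as usual ... */

	while (w.nbytes) {
		/* process w.src.virt.addr / w.dst.virt.addr */
		err = skcipher_walk_done(&w, 0);
	}
	return err;
}
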
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation. When the extra
self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
splat occurred during lrw(aes) testing.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@...r.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@...hat.com>
Signed-off-by: Eric Biggers <ebiggers@...gle.com>
Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
crypto/lrw.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_req
 	}
 
 	err = skcipher_walk_virt(&w, req, false);
-	iv = (__be32 *)w.iv;
+	if (err)
+		return err;
 
+	iv = (__be32 *)w.iv;
 	counter[0] = be32_to_cpu(iv[3]);
 	counter[1] = be32_to_cpu(iv[2]);
 	counter[2] = be32_to_cpu(iv[1]);