Date: Fri, 22 Mar 2019 12:14:53 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, stable@...r.kernel.org, Eric Biggers <ebiggers@...gle.com>, Herbert Xu <herbert@...dor.apana.org.au>
Subject: [PATCH 4.4 155/230] crypto: ahash - fix another early termination in hash walk

4.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric Biggers <ebiggers@...gle.com>

commit 77568e535af7c4f97eaef1e555bf0af83772456c upstream.

Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest.  The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early.  This happens
because the 'nbytes' variable in crypto_hash_walk_done() is assigned
the number of bytes remaining in the page, then later interpreted as
the number of bytes remaining in the scatterlist element.  Fix it.

Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@...r.kernel.org
Signed-off-by: Eric Biggers <ebiggers@...gle.com>
Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 crypto/ahash.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -85,17 +85,17 @@ static int hash_walk_new_entry(struct cr
 int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 {
 	unsigned int alignmask = walk->alignmask;
-	unsigned int nbytes = walk->entrylen;
 
 	walk->data -= walk->offset;
 
-	if (nbytes && walk->offset & alignmask && !err) {
-		walk->offset = ALIGN(walk->offset, alignmask + 1);
-		nbytes = min(nbytes,
-			     ((unsigned int)(PAGE_SIZE)) - walk->offset);
-		walk->entrylen -= nbytes;
+	if (walk->entrylen && (walk->offset & alignmask) && !err) {
+		unsigned int nbytes;
 
+		walk->offset = ALIGN(walk->offset, alignmask + 1);
+		nbytes = min(walk->entrylen,
+			     (unsigned int)(PAGE_SIZE - walk->offset));
 		if (nbytes) {
+			walk->entrylen -= nbytes;
 			walk->data += walk->offset;
 			return nbytes;
 		}
@@ -115,7 +115,7 @@ int crypto_hash_walk_done(struct crypto_
 	if (err)
 		return err;
 
-	if (nbytes) {
+	if (walk->entrylen) {
 		walk->offset = 0;
 		walk->pg++;
 		return hash_walk_next(walk);
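
To see the early termination concretely, here is a minimal userspace sketch
(not part of the patch and not kernel code) that models only the continuation
decision the two hunks above change.  SIM_PAGE_SIZE, SIM_ALIGN and struct
sim_walk are simplified stand-ins for PAGE_SIZE, ALIGN() and
struct crypto_hash_walk; error handling and the page map/unmap bookkeeping
are omitted.

/*
 * Sketch of the decision at the end of crypto_hash_walk_done() when a
 * scatterlist element crosses a page boundary, comparing the old check
 * on the page-clamped 'nbytes' with the fixed check on walk->entrylen.
 */
#include <stdbool.h>
#include <stdio.h>

#define SIM_PAGE_SIZE 4096u                                  /* stand-in for PAGE_SIZE */
#define SIM_ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned int)(a) - 1))

struct sim_walk {
	unsigned int entrylen;   /* bytes left in the scatterlist element */
	unsigned int offset;     /* current offset within the page */
	unsigned int alignmask;  /* e.g. 3 for a 4-byte alignmask */
};

/* Old logic: returns true if the walk would continue to the next page. */
static bool walk_continues_old(struct sim_walk w)
{
	unsigned int nbytes = w.entrylen;

	if (nbytes && (w.offset & w.alignmask)) {
		w.offset = SIM_ALIGN(w.offset, w.alignmask + 1);
		nbytes = nbytes < SIM_PAGE_SIZE - w.offset ?
			 nbytes : SIM_PAGE_SIZE - w.offset;
		w.entrylen -= nbytes;
		if (nbytes)
			return true;    /* caller hashes the aligned tail of this page */
	}
	/* BUG: here nbytes means "bytes left in this page" (possibly 0),
	 * but it is used as "bytes left in the element". */
	return nbytes != 0;
}

/* Fixed logic: the continuation check consults walk->entrylen itself. */
static bool walk_continues_new(struct sim_walk w)
{
	if (w.entrylen && (w.offset & w.alignmask)) {
		unsigned int nbytes;

		w.offset = SIM_ALIGN(w.offset, w.alignmask + 1);
		nbytes = w.entrylen < SIM_PAGE_SIZE - w.offset ?
			 w.entrylen : SIM_PAGE_SIZE - w.offset;
		if (nbytes) {
			w.entrylen -= nbytes;
			return true;
		}
	}
	return w.entrylen != 0;
}

int main(void)
{
	/* 100 bytes of the element still to hash, but the misaligned offset
	 * rounds up to exactly the end of the page. */
	struct sim_walk w = { .entrylen = 100, .offset = 4094, .alignmask = 3 };

	printf("old: walk continues? %s\n", walk_continues_old(w) ? "yes" : "no");
	printf("new: walk continues? %s\n", walk_continues_new(w) ? "yes" : "no");
	return 0;
}

With these numbers the aligned offset equals SIM_PAGE_SIZE, so the old check
sees nbytes == 0 and stops the walk with 100 bytes of the element unhashed,
while the fixed check sees walk->entrylen == 100 and moves on to the next
page, which is the behaviour the patch restores.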