Message-ID: <20200215061416.GZ23230@ZenIV.linux.org.uk>
Date:   Sat, 15 Feb 2020 06:14:16 +0000
From:   Al Viro <viro@...iv.linux.org.uk>
To:     Atul Gupta <atul.gupta@...lsio.com>
Cc:     linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [RFC][PATCH] almost certain bug in
 drivers/crypto/chelsio/chcr_algo.c:create_authenc_wr()

        kctx_len = (ntohl(KEY_CONTEXT_CTX_LEN_V(aeadctx->key_ctx_hdr)) << 4)
                - sizeof(chcr_req->key_ctx);
can't possibly be endian-safe.  Look: ->key_ctx_hdr is __be32.  And
KEY_CONTEXT_CTX_LEN_V is "shift up by 24 bits".  On little-endian hosts it
sees
	b0 b1 b2 b3
in memory, interprets that into b0 + (b1 << 8) + (b2 << 16) + (b3 << 24),
shifts up by 24, resulting in b0 << 24, does ntohl (byteswap on l-e),
gets b0 and shifts that up by 4.  So we get b0 * 16 - sizeof(...).

Sounds reasonable, but on b-e we get
b3 + (b2 << 8) + (b1 << 16) + (b0 << 24), shift up by 24,
yielding b3 << 24, do ntohl (no-op on b-e) and then shift up by 4.
Resulting in (b3 << 28) - sizeof(...), i.e. slightly under b3 * 256M.

Then we increase it some more and pass to alloc_skb() as size.
Somehow I doubt that we really want a quarter-gigabyte skb allocation
here...
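
To make that concrete, here's a throwaway userland sketch (nothing from the
driver, the byte values are made up, and the & 0xff mask is my guess at what
the _G macro does) that runs the two orderings side by side; on l-e both come
out as b0 * 16, on b-e the buggy one comes out as b3 << 28:

	#include <arpa/inet.h>	/* ntohl() */
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		/* the header as it sits in memory: b0 b1 b2 b3 */
		uint8_t raw[4] = { 0x02, 0xaa, 0xbb, 0x07 };
		uint32_t hdr;	/* stands in for the __be32 ->key_ctx_hdr */

		memcpy(&hdr, raw, sizeof(hdr));

		/* buggy order: shift the host-order value up by 24, byteswap afterwards */
		unsigned int bad  = ntohl(hdr << 24) << 4;
		/* fixed order: byteswap first, then pull the field out of bits 31..24 */
		unsigned int good = ((ntohl(hdr) >> 24) & 0xff) << 4;

		/* l-e: both print 0x20 (b0 * 16); b-e: bad is 0x70000000 (b3 << 28) */
		printf("bad = %#x, good = %#x\n", bad, good);
		return 0;
	}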

Note that when you are building those values in
#define  FILL_KEY_CTX_HDR(ck_size, mk_size, d_ck, opad, ctx_len) \
                htonl(KEY_CONTEXT_VALID_V(1) | \
                      KEY_CONTEXT_CK_SIZE_V((ck_size)) | \
                      KEY_CONTEXT_MK_SIZE_V(mk_size) | \
                      KEY_CONTEXT_DUAL_CK_V((d_ck)) | \
                      KEY_CONTEXT_OPAD_PRESENT_V((opad)) | \
                      KEY_CONTEXT_SALT_PRESENT_V(1) | \
                      KEY_CONTEXT_CTX_LEN_V((ctx_len)))
ctx_len ends up in the first octet (i.e. b0 in the above), which
matches the current behaviour on l-e.  If that's the intent, this
thing should've been
        kctx_len = (KEY_CONTEXT_CTX_LEN_G(ntohl(aeadctx->key_ctx_hdr)) << 4)
                - sizeof(chcr_req->key_ctx);
instead - after ntohl() we get (b0 << 24) + (b1 << 16) + (b2 << 8) + b3,
shift it down by 24 (getting b0), resulting in b0 * 16 - sizeof(...) both on
l-e and on b-e.
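
FWIW, the same extraction could also be spelled with the generic helpers from
linux/bitfield.h - purely as an illustration, not part of the patch, and the
GENMASK(31, 24) below is my reading of the "shift up by 24" above rather than
something checked against the _G macro's definition:

        kctx_len = (FIELD_GET(GENMASK(31, 24), be32_to_cpu(aeadctx->key_ctx_hdr)) << 4)
                - sizeof(chcr_req->key_ctx);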

PS: when sparse warns you about endianness problems, it might be worth checking
if there really is something wrong.  And I don't mean "slap __force cast on it"...
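
(If anyone wants to see the warning, it's the usual kbuild sparse invocation,

	make C=2 drivers/crypto/chelsio/

which should have sparse complain about mixing a restricted __be32 with plain
integer arithmetic on that line; exact wording depends on the sparse version,
and on older trees you may need CF="-D__CHECK_ENDIAN__" to get endianness
checks at all.)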

Signed-off-by: Al Viro <viro@...iv.linux.org.uk>
---
diff -urN a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2351,7 +2351,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
 	snents = sg_nents_xlen(req->src, req->assoclen + req->cryptlen,
 			       CHCR_SRC_SG_SIZE, 0);
 	dst_size = get_space_for_phys_dsgl(dnents);
-	kctx_len = (ntohl(KEY_CONTEXT_CTX_LEN_V(aeadctx->key_ctx_hdr)) << 4)
+	kctx_len = (KEY_CONTEXT_CTX_LEN_G(ntohl(aeadctx->key_ctx_hdr)) << 4)
 		- sizeof(chcr_req->key_ctx);
 	transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dst_size);
 	reqctx->imm = (transhdr_len + req->assoclen + req->cryptlen) <
