Date:   Thu,  2 Aug 2018 21:50:32 +0530
From:   Vakul Garg <vakul.garg@....com>
To:     netdev@...r.kernel.org
Cc:     borisp@...lanox.com, aviadye@...lanox.com, davejwatson@...com,
        davem@...emloft.net, Vakul Garg <vakul.garg@....com>
Subject: [PATCH net-next] net/tls: Always get number of sg entries for skb to be decrypted

Function decrypt_skb() made a bad assumption that the number of sg entries
required for mapping the skb to be decrypted would always be less than
MAX_SKB_FRAGS. The required count of sg entries for the skb should always
be calculated. If they cannot fit in the local array sgin_arr[], allocate
them from the heap irrespective of whether it is a zero-copy case or
otherwise. The change also benefits the non-zero-copy case, as we can now
use sgin_arr[] instead of always allocating sg entries from the heap.

Signed-off-by: Vakul Garg <vakul.garg@....com>
---

The said problem has been discussed with Dave Watson on the mailing list.

 net/tls/tls_sw.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index ff3a6904a722..e2cf7aebb877 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -693,7 +693,7 @@ int decrypt_skb(struct sock *sk, struct sk_buff *skb,
 	struct scatterlist sgin_arr[MAX_SKB_FRAGS + 2];
 	struct scatterlist *sgin = &sgin_arr[0];
 	struct strp_msg *rxm = strp_msg(skb);
-	int ret, nsg = ARRAY_SIZE(sgin_arr);
+	int ret, nsg;
 	struct sk_buff *unused;
 
 	ret = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
@@ -703,12 +703,20 @@ int decrypt_skb(struct sock *sk, struct sk_buff *skb,
 		return ret;
 
 	memcpy(iv, tls_ctx->rx.iv, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
-	if (!sgout) {
-		nsg = skb_cow_data(skb, 0, &unused) + 1;
+
+	/* If required number of SG entries for skb are more than
+	 * sgin_arr elements, then dynamically allocate sg table.
+	 */
+	nsg = skb_cow_data(skb, 0, &unused) + 1;
+	if (nsg > ARRAY_SIZE(sgin_arr)) {
 		sgin = kmalloc_array(nsg, sizeof(*sgin), sk->sk_allocation);
-		sgout = sgin;
+		if (!sgin)
+			return -ENOMEM;
 	}
 
+	if (!sgout)
+		sgout = sgin;
+
 	sg_init_table(sgin, nsg);
 	sg_set_buf(&sgin[0], ctx->rx_aad_ciphertext, TLS_AAD_SPACE_SIZE);
 
-- 
2.13.6
