Message-Id: <20190504102452.152806399@linuxfoundation.org>
Date: Sat, 4 May 2019 12:25:21 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
John Hurley <john.hurley@...ronome.com>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.19 19/23] net/tls: fix copy to fragments in reencrypt

From: Jakub Kicinski <jakub.kicinski@...ronome.com>

[ Upstream commit eb3d38d5adb520435d4e4af32529ccb13ccc9935 ]

Fragments may contain data from other records so we have to account
for that when we calculate the destination and max length of the copy
we can perform.

Note that 'offset' is the offset within the message, so it can't be
passed as the offset within the frag.

Before this fix, skb_store_bits() would have realised the call was
wrong and simply not copied any data.
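
To make the arithmetic concrete, below is a minimal user-space sketch
of the new per-frag bookkeeping.  The skb helpers are replaced by plain
integers, and the frag sizes, offsets and the MIN macro are made up for
illustration; only offset, pos, frag_pos and data_len mirror the patch.

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    /* Hypothetical layout: 1000 bytes of head/page data followed by
     * three frag_list skbs.  The record starts at message offset 600
     * and its payload (without the auth tag) is 2000 bytes.
     */
    int frag_len[] = { 1400, 1400, 500 };
    int pagelen  = 1000;        /* stands in for skb_pagelen(skb) */
    int msg_off  = 600;         /* stands in for rxm->offset      */
    int data_len = 2000;        /* full_len minus the tag size    */
    int offset   = msg_off;     /* running position in the msg    */
    int pos, i, copy;

    /* Part of the record that lives in the head/paged area. */
    if (pagelen > offset) {
        copy = MIN(pagelen - offset, data_len);
        printf("head:   dst_off=%4d len=%4d\n", offset, copy);
        offset += copy;
    }

    pos = pagelen;              /* msg position where frag 0 starts */
    for (i = 0; i < 3; i++) {
        int frag_pos;

        if (pos + frag_len[i] <= offset)    /* frag before record  */
            goto done_with_frag;
        if (pos >= data_len + msg_off)      /* past record payload */
            break;

        frag_pos = offset - pos;            /* offset inside frag  */
        copy = MIN(frag_len[i] - frag_pos,
                   data_len + msg_off - offset);
        printf("frag %d: dst_off=%4d len=%4d\n", i, frag_pos, copy);
        offset += copy;
done_with_frag:
        pos += frag_len[i];
    }
    return 0;
}

It prints a 400 byte copy into the head area at offset 600, 1400 bytes
at frag offset 0 into the first frag, 200 bytes at frag offset 0 into
the second, and stops before the third frag, which lies entirely past
the record payload.
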
Fixes: 4799ac81e52a ("tls: Add rx inline crypto offload")
Signed-off-by: Jakub Kicinski <jakub.kicinski@...ronome.com>
Reviewed-by: John Hurley <john.hurley@...ronome.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/tls/tls_device.c | 29 ++++++++++++++++++++++-------
1 file changed, 22 insertions(+), 7 deletions(-)
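
A worked example with made-up numbers: suppose a record sits at
rxm->offset = 600 with data_len = 2000, skb_pagelen() is 1000, and the
rest of the record spans two 1400-byte frag_list skbs.  After the head
copy, offset is 1000.  The old loop then called
skb_store_bits(skb_iter, 1000, buf, 1400) on the first frag and
skb_store_bits(skb_iter, 2400, buf, 200) on the second; both requests
run past the end of a 1400-byte frag, so skb_store_bits() copies
nothing and the device-decrypted frags are never rewritten from buf.
With this patch the destination is frag_pos = offset - pos (0 for both
frags here) and the length is clamped by data_len + rxm->offset -
offset, so the first frag gets 1400 bytes, the second gets the
remaining 200, and any bytes of a following record in that frag are
left untouched.
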
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -569,7 +569,7 @@ void handle_device_resync(struct sock *s
 static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
 {
 	struct strp_msg *rxm = strp_msg(skb);
-	int err = 0, offset = rxm->offset, copy, nsg;
+	int err = 0, offset = rxm->offset, copy, nsg, data_len, pos;
 	struct sk_buff *skb_iter, *unused;
 	struct scatterlist sg[1];
 	char *orig_buf, *buf;
@@ -600,9 +600,10 @@ static int tls_device_reencrypt(struct s
 	else
 		err = 0;
 
+	data_len = rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE;
+
 	if (skb_pagelen(skb) > offset) {
-		copy = min_t(int, skb_pagelen(skb) - offset,
-			     rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE);
+		copy = min_t(int, skb_pagelen(skb) - offset, data_len);
 
 		if (skb->decrypted)
 			skb_store_bits(skb, offset, buf, copy);
@@ -611,16 +612,30 @@ static int tls_device_reencrypt(struct s
 		buf += copy;
 	}
 
+	pos = skb_pagelen(skb);
 	skb_walk_frags(skb, skb_iter) {
-		copy = min_t(int, skb_iter->len,
-			     rxm->full_len - offset + rxm->offset -
-			     TLS_CIPHER_AES_GCM_128_TAG_SIZE);
+		int frag_pos;
+
+		/* Practically all frags must belong to msg if reencrypt
+		 * is needed with current strparser and coalescing logic,
+		 * but strparser may "get optimized", so let's be safe.
+		 */
+		if (pos + skb_iter->len <= offset)
+			goto done_with_frag;
+		if (pos >= data_len + rxm->offset)
+			break;
+
+		frag_pos = offset - pos;
+		copy = min_t(int, skb_iter->len - frag_pos,
+			     data_len + rxm->offset - offset);
 
 		if (skb_iter->decrypted)
-			skb_store_bits(skb_iter, offset, buf, copy);
+			skb_store_bits(skb_iter, frag_pos, buf, copy);
 
 		offset += copy;
 		buf += copy;
+done_with_frag:
+		pos += skb_iter->len;
 	}
 
 free_buf: