Message-Id: <20230807071022.10091-1-hare@suse.de>
Date: Mon, 7 Aug 2023 09:10:22 +0200
From: Hannes Reinecke <hare@...e.de>
To: Christoph Hellwig <hch@....de>
Cc: Sagi Grimberg <sagi@...mberg.me>,
Keith Busch <kbusch@...nel.org>,
linux-nvme@...ts.infradead.org,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
netdev@...r.kernel.org,
Hannes Reinecke <hare@...e.de>
Subject: [PATCHv2] net/tls: avoid TCP window full during ->read_sock()

When flushing the backlog after decrypting a record we don't really
know how much data the caller wants us to evaluate, so pass INT_MAX
and 0 as arguments to tls_read_flush_backlog() to ensure the backlog
is flushed once 128k of data has been read since the last flush.
Otherwise we might read too much data and trigger a TCP window full
condition.

Suggested-by: Jakub Kicinski <kuba@...nel.org>
Signed-off-by: Hannes Reinecke <hare@...e.de>
---
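A note on the magic values, since they are not obvious from the diff
alone: tls_read_flush_backlog() only short-circuits when the caller
already has all the data it asked for, and otherwise flushes every
128kB. A rough sketch of that helper (paraphrased from
net/tls/tls_sw.c; the exact upstream code may differ slightly):

/* Sketch of the flush helper this patch calls into, for reference
 * only; paraphrased, not a verbatim copy of the upstream function.
 */
static bool
tls_read_flush_backlog(struct sock *sk, struct tls_prot_info *prot,
		       size_t len_left, size_t decrypted, ssize_t done,
		       size_t *flushed_at)
{
	size_t max_rec;

	/* With len_left == INT_MAX and decrypted == 0, as passed from
	 * tls_sw_read_sock() after this patch, this early return can
	 * never trigger ...
	 */
	if (len_left <= decrypted)
		return false;

	max_rec = prot->overhead_size - prot->tail_size + TLS_MAX_PAYLOAD_SIZE;

	/* ... so we flush once at least SZ_128K bytes have been read
	 * since the last flush, or once less than one full record is
	 * left queued on the TCP socket.
	 */
	if (done - *flushed_at < SZ_128K && tcp_inq(sk) > max_rec)
		return false;

	*flushed_at = done;
	return sk_flush_backlog(sk);
}

With len_left == INT_MAX and decrypted == 0 the flush is driven purely
by the 128kB threshold (or by the receive queue running low), which is
the behaviour we want when ->read_sock() has no length hint.
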
 net/tls/tls_sw.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 9c1f13541708..5c122d7bb784 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2240,7 +2240,6 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 			tlm = tls_msg(skb);
 		} else {
 			struct tls_decrypt_arg darg;
-			int to_decrypt;
 
 			err = tls_rx_rec_wait(sk, NULL, true, released);
 			if (err <= 0)
@@ -2248,20 +2247,18 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 			memset(&darg.inargs, 0, sizeof(darg.inargs));
 
-			rxm = strp_msg(tls_strp_msg(ctx));
-			tlm = tls_msg(tls_strp_msg(ctx));
-
-			to_decrypt = rxm->full_len - prot->overhead_size;
-
 			err = tls_rx_one_record(sk, NULL, &darg);
 			if (err < 0) {
 				tls_err_abort(sk, -EBADMSG);
 				goto read_sock_end;
 			}
 
-			released = tls_read_flush_backlog(sk, prot, rxm->full_len, to_decrypt,
-							  decrypted, &flushed_at);
+			released = tls_read_flush_backlog(sk, prot, INT_MAX,
+							  0, decrypted,
+							  &flushed_at);
 			skb = darg.skb;
+			rxm = strp_msg(skb);
+			tlm = tls_msg(skb);
 
 			decrypted += rxm->full_len;
 
 			tls_rx_rec_done(ctx);

-- 
2.35.3