Message-ID: <20190116014032.25364-1-vakul.garg@nxp.com>
Date: Wed, 16 Jan 2019 01:42:44 +0000
From: Vakul Garg <vakul.garg@....com>
To: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: "john.fastabend@...il.com" <john.fastabend@...il.com>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"davejwatson@...com" <davejwatson@...com>,
"davem@...emloft.net" <davem@...emloft.net>,
Vakul Garg <vakul.garg@....com>
Subject: [RESEND PATCH net-next] Optimize sk_msg_clone() by data merge to end
dst sg entry

Function sk_msg_clone() has been modified to merge data from the source sg
entry into the destination sg entry if the cloned data resides in the same
page and is contiguous with the end entry of the destination sk_msg. This
improves kernel tls throughput to the tune of 10%.
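
As an illustration only (not part of the patch), the merge test can be
sketched in plain userspace C with a simplified stand-in for a scatterlist
entry; "struct fake_sge" and can_merge() below are hypothetical names:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for one scatterlist entry. */
struct fake_sge {
	void *page;          /* backing page of the buffer */
	unsigned int offset; /* start of the data within that page */
	unsigned int length; /* number of bytes described by the entry */
};

/* True when the source data (starting at 'off') lives in the same page as
 * the last destination entry and begins exactly where that entry ends, so
 * the clone can be absorbed by just growing the destination entry's length.
 */
static bool can_merge(const struct fake_sge *dst_end,
		      const struct fake_sge *src, unsigned int off)
{
	return dst_end->page == src->page &&
	       src->offset + off == dst_end->offset + dst_end->length;
}

int main(void)
{
	char page[4096];
	struct fake_sge dst_end = { page, 0, 100 };
	struct fake_sge src = { page, 100, 50 };

	/* Contiguous in the same page: merge instead of using a new entry. */
	printf("mergeable: %d\n", can_merge(&dst_end, &src, 0));
	return 0;
}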

When the user space tls application calls sendmsg() with MSG_MORE, it leads
to sk_msg_clone() being called with the new data to be cloned placed
contiguous to the previously cloned data. Without this optimization, a new
sg entry in the destination sk_msg (i.e. rec->msg_plaintext in
tls_clone_plaintext_msg()) gets used for every such call. This leads to
exhaustion of sg entries in rec->msg_plaintext even before a full 16K of
allowable record data is accumulated, and hence we lose the opportunity to
encrypt and send a full 16K record.

With this patch, the kernel tls can accumulate a full 16K of record data
irrespective of the size of the data passed in via sendmsg() with MSG_MORE.
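
For reference, the sending pattern that exercises this path looks roughly
like the sketch below (illustrative userspace code; it assumes tls_fd is an
already-connected TCP socket with kernel tls enabled via
setsockopt(SOL_TLS, TLS_TX, ...), and send_in_chunks() is a hypothetical
helper, not part of this patch):

#include <sys/socket.h>
#include <sys/types.h>

/* Send 'len' bytes in 'chunk'-sized pieces; all but the last carry MSG_MORE
 * so the kernel keeps accumulating plaintext into one record.
 */
static int send_in_chunks(int tls_fd, const char *buf, size_t len, size_t chunk)
{
	size_t sent = 0;

	while (sent < len) {
		size_t n = (len - sent < chunk) ? len - sent : chunk;
		int flags = (sent + n < len) ? MSG_MORE : 0;
		ssize_t rc = send(tls_fd, buf + sent, n, flags);

		if (rc < 0)
			return -1;
		sent += (size_t)rc;
	}
	return 0;
}
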
Signed-off-by: Vakul Garg <vakul.garg@....com>
---
The patch is being resent since net-next was closed when it was sent
earlier.

 net/core/skmsg.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 26458876d763..f15393ab7fe1 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -78,11 +78,9 @@ int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
 {
 	int i = src->sg.start;
 	struct scatterlist *sge = sk_msg_elem(src, i);
+	struct scatterlist *sgd = NULL;
 	u32 sge_len, sge_off;
 
-	if (sk_msg_full(dst))
-		return -ENOSPC;
-
 	while (off) {
 		if (sge->length > off)
 			break;
@@ -94,16 +92,27 @@ int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
 	}
 
 	while (len) {
-		if (sk_msg_full(dst))
-			return -ENOSPC;
-
 		sge_len = sge->length - off;
-		sge_off = sge->offset + off;
 		if (sge_len > len)
 			sge_len = len;
+
+		if (dst->sg.end)
+			sgd = sk_msg_elem(dst, dst->sg.end - 1);
+
+		if (sgd &&
+		    (sg_page(sge) == sg_page(sgd)) &&
+		    (sg_virt(sge) + off == sg_virt(sgd) + sgd->length)) {
+			sgd->length += sge_len;
+			dst->sg.size += sge_len;
+		} else if (!sk_msg_full(dst)) {
+			sge_off = sge->offset + off;
+			sk_msg_page_add(dst, sg_page(sge), sge_len, sge_off);
+		} else {
+			return -ENOSPC;
+		}
+
 		off = 0;
 		len -= sge_len;
-		sk_msg_page_add(dst, sg_page(sge), sge_len, sge_off);
 		sk_mem_charge(sk, sge_len);
 		sk_msg_iter_var_next(i);
 		if (i == src->sg.end && len)
--
2.13.6