Message-Id: <20191127201646.25455-7-jakub.kicinski@netronome.com>
Date: Wed, 27 Nov 2019 12:16:44 -0800
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: davem@...emloft.net, john.fastabend@...il.com
Cc: netdev@...r.kernel.org, oss-drivers@...ronome.com,
borisp@...lanox.com, aviadye@...lanox.com, daniel@...earbox.net,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Simon Horman <simon.horman@...ronome.com>
Subject: [PATCH net 6/8] net/tls: use sg_next() to walk sg entries
The partially sent record cleanup path increments an SG
entry directly instead of using sg_next(). This should not
be a problem today, as encrypted messages should always be
allocated as arrays. But given this is a cleanup path, it
would be easy to miss were this ever to change. Use
sg_next(), and simplify the code.
Signed-off-by: Jakub Kicinski <jakub.kicinski@...ronome.com>
Reviewed-by: Simon Horman <simon.horman@...ronome.com>
---
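Note for reviewers: below is a toy userspace sketch (not the kernel
scatterlist API; the struct and helper are made up for illustration)
of why walking with a next-helper is safer than sg++. Scatterlists
may be chained, so the entry after sg[i] is not always sg[i + 1].

/*
 * Toy model of a chained scatterlist walk. Names (toy_sg,
 * toy_sg_next) are hypothetical and only mirror the pattern the
 * patch switches to: for (...; sg; sg = sg_next(sg)).
 */
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

struct toy_sg {
	unsigned int length;
	bool is_last;		/* marks the end of the whole list */
	struct toy_sg *chain;	/* set when this slot links to another array */
};

/* Mimic sg_next(): return NULL past the last entry, follow chain links. */
static struct toy_sg *toy_sg_next(struct toy_sg *sg)
{
	if (sg->is_last)
		return NULL;
	sg++;
	if (sg->chain)
		sg = sg->chain;
	return sg;
}

int main(void)
{
	struct toy_sg second[2] = {
		{ .length = 300 },
		{ .length = 400, .is_last = true },
	};
	struct toy_sg first[3] = {
		{ .length = 100 },
		{ .length = 200 },
		{ .chain = second },	/* chain slot, carries no data */
	};
	struct toy_sg *sg;
	unsigned int total = 0;

	for (sg = first; sg; sg = toy_sg_next(sg))
		total += sg->length;

	printf("total length: %u\n", total);	/* prints 1000 */
	return 0;
}

With a direct sg++ walk, the chain slot in first[2] would be treated
as a data entry and the second array would never be reached.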
include/net/tls.h | 2 +-
net/tls/tls_main.c | 13 ++-----------
net/tls/tls_sw.c | 3 ++-
3 files changed, 5 insertions(+), 13 deletions(-)
diff --git a/include/net/tls.h b/include/net/tls.h
index 9d32f7ce6b31..df630f5fc723 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -376,7 +376,7 @@ int tls_push_sg(struct sock *sk, struct tls_context *ctx,
int flags);
int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
int flags);
-bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
+void tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
static inline struct tls_msg *tls_msg(struct sk_buff *skb)
{
diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index bdca31ffe6da..b3da6c5ab999 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -209,24 +209,15 @@ int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
return tls_push_sg(sk, ctx, sg, offset, flags);
}
-bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
+void tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
{
struct scatterlist *sg;
- sg = ctx->partially_sent_record;
- if (!sg)
- return false;
-
- while (1) {
+ for (sg = ctx->partially_sent_record; sg; sg = sg_next(sg)) {
put_page(sg_page(sg));
sk_mem_uncharge(sk, sg->length);
-
- if (sg_is_last(sg))
- break;
- sg++;
}
ctx->partially_sent_record = NULL;
- return true;
}
static void tls_write_space(struct sock *sk)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 5989dfe5c443..2b2d0bae14a9 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2089,7 +2089,8 @@ void tls_sw_release_resources_tx(struct sock *sk)
/* Free up un-sent records in tx_list. First, free
* the partially sent record if any at head of tx_list.
*/
- if (tls_free_partial_record(sk, tls_ctx)) {
+ if (tls_ctx->partially_sent_record) {
+ tls_free_partial_record(sk, tls_ctx);
rec = list_first_entry(&ctx->tx_list,
struct tls_rec, list);
list_del(&rec->list);
--
2.23.0