Message-ID: <20250922192404.764714238@linuxfoundation.org>
Date: Mon, 22 Sep 2025 21:29:36 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: stable@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
patches@...ts.linux.dev,
David Howells <dhowells@...hat.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Jens Axboe <axboe@...nel.dk>,
Matthew Wilcox <willy@...radead.org>,
linux-crypto@...r.kernel.org,
netdev@...r.kernel.org,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 6.1 43/61] crypto: af_alg: Indent the loop in af_alg_sendmsg()

6.1-stable review patch. If anyone has any objections, please let me know.

------------------

From: David Howells <dhowells@...hat.com>

[ Upstream commit 73d7409cfdad7fd08a9203eb2912c1c77e527776 ]

Put the loop in af_alg_sendmsg() into an if-statement to indent it. This
makes the next patch easier to review, as that patch will add another
branch to the if-statement to handle MSG_SPLICE_PAGES.
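
[A purely illustrative sketch, not part of this patch or of this backport:
the placeholder test "if (1 /* TODO check MSG_SPLICE_PAGES */)" exists so a
later upstream patch can turn it into a two-branch construct. The flag test
and the splice branch below are assumptions about that later change, shown
only to explain why the loop is being indented now:

	if (msg->msg_flags & MSG_SPLICE_PAGES) {
		/* hypothetical later branch: link the caller's pages into
		 * the scatterlist instead of copying the data */
	} else {
		do {
			/* existing path kept by this patch: alloc_page()
			 * plus memcpy_from_msg(), only reindented */
		} while (len && sgl->cur < MAX_SGL_ENTS);
	}

This patch itself only changes indentation; no behaviour change is intended.]
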
Signed-off-by: David Howells <dhowells@...hat.com>
cc: Herbert Xu <herbert@...dor.apana.org.au>
cc: "David S. Miller" <davem@...emloft.net>
cc: Eric Dumazet <edumazet@...gle.com>
cc: Jakub Kicinski <kuba@...nel.org>
cc: Paolo Abeni <pabeni@...hat.com>
cc: Jens Axboe <axboe@...nel.dk>
cc: Matthew Wilcox <willy@...radead.org>
cc: linux-crypto@...r.kernel.org
cc: netdev@...r.kernel.org
Acked-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@...hat.com>
Stable-dep-of: 9574b2330dbd ("crypto: af_alg - Set merge to zero early in af_alg_sendmsg")
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
crypto/af_alg.c | 51 ++++++++++++++++++++++++++-----------------------
1 file changed, 27 insertions(+), 24 deletions(-)
diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index fef69d2a6b183..d5a8368a47c5c 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -927,35 +927,38 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
 		if (sgl->cur)
 			sg_unmark_end(sg + sgl->cur - 1);
 
-		do {
-			struct page *pg;
-			unsigned int i = sgl->cur;
+		if (1 /* TODO check MSG_SPLICE_PAGES */) {
+			do {
+				struct page *pg;
+				unsigned int i = sgl->cur;
 
-			plen = min_t(size_t, len, PAGE_SIZE);
+				plen = min_t(size_t, len, PAGE_SIZE);
 
-			pg = alloc_page(GFP_KERNEL);
-			if (!pg) {
-				err = -ENOMEM;
-				goto unlock;
-			}
+				pg = alloc_page(GFP_KERNEL);
+				if (!pg) {
+					err = -ENOMEM;
+					goto unlock;
+				}
 
-			sg_assign_page(sg + i, pg);
+				sg_assign_page(sg + i, pg);
 
-			err = memcpy_from_msg(page_address(sg_page(sg + i)),
-					      msg, plen);
-			if (err) {
-				__free_page(sg_page(sg + i));
-				sg_assign_page(sg + i, NULL);
-				goto unlock;
-			}
+				err = memcpy_from_msg(
+					page_address(sg_page(sg + i)),
+					msg, plen);
+				if (err) {
+					__free_page(sg_page(sg + i));
+					sg_assign_page(sg + i, NULL);
+					goto unlock;
+				}
 
-			sg[i].length = plen;
-			len -= plen;
-			ctx->used += plen;
-			copied += plen;
-			size -= plen;
-			sgl->cur++;
-		} while (len && sgl->cur < MAX_SGL_ENTS);
+				sg[i].length = plen;
+				len -= plen;
+				ctx->used += plen;
+				copied += plen;
+				size -= plen;
+				sgl->cur++;
+			} while (len && sgl->cur < MAX_SGL_ENTS);
+		}
 
 		if (!size)
 			sg_mark_end(sg + sgl->cur - 1);

--
2.51.0