Message-Id: <20251119224140.8616-42-david.laight.linux@gmail.com>
Date: Wed, 19 Nov 2025 22:41:37 +0000
From: david.laight.linux@...il.com
To: linux-kernel@...r.kernel.org,
	bpf@...r.kernel.org,
	netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>,
	Jakub Sitnicki <jakub@...udflare.com>,
	John Fastabend <john.fastabend@...il.com>,
	Paolo Abeni <pabeni@...hat.com>,
	David Laight <david.laight.linux@...il.com>
Subject: [PATCH 41/44] net/core: Change loop conditions so min() can be used

From: David Laight <david.laight.linux@...il.com>

Loops like:
	int copied = ...;
	...
	while (copied) {
		use = min_t(type, copied, PAGE_SIZE - offset);
		...
		copied -= use;
	}
can be converted to a plain min() if the comparison is changed to:
	while (copied > 0) {
This removes any chance of high bits being discarded by min_t().
(In the case above PAGE_SIZE - offset always fits in an 'int', so the
cast is safe, but there are plenty of cases where the stricter type
check of min() shows up bugs.)

Signed-off-by: David Laight <david.laight.linux@...il.com>
---
 net/core/datagram.c | 6 +++---
 net/core/skmsg.c    | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/core/datagram.c b/net/core/datagram.c
index c285c6465923..555f38b89729 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -664,8 +664,8 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 		head = compound_head(pages[n]);
 		order = compound_order(head);
 
-		for (refs = 0; copied != 0; start = 0) {
-			int size = min_t(int, copied, PAGE_SIZE - start);
+		for (refs = 0; copied > 0; start = 0) {
+			int size = min(copied, PAGE_SIZE - start);
 
 			if (pages[n] - head > (1UL << order) - 1) {
 				head = compound_head(pages[n]);
@@ -783,7 +783,7 @@ EXPORT_SYMBOL(__zerocopy_sg_from_iter);
  */
 int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
 {
-	int copy = min_t(int, skb_headlen(skb), iov_iter_count(from));
+	int copy = min(skb_headlen(skb), iov_iter_count(from));
 
 	/* copy up to skb headlen */
 	if (skb_copy_datagram_from_iter(skb, 0, from, copy))
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 2ac7731e1e0a..b58e319f4e2e 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -335,8 +335,8 @@ int sk_msg_zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
 		bytes -= copied;
 		msg->sg.size += copied;
 
-		while (copied) {
-			use = min_t(int, copied, PAGE_SIZE - offset);
+		while (copied > 0) {
+			use = min(copied, PAGE_SIZE - offset);
 			sg_set_page(&msg->sg.data[msg->sg.end],
 				    pages[i], use, offset);
 			sg_unmark_end(&msg->sg.data[msg->sg.end]);
-- 
2.39.5
