Date:	Tue, 19 Jun 2007 01:25:57 +0300
From:	"Ilpo Järvinen" <>
To:	David Miller <>
	"Ilpo Järvinen" <>
Subject: [RFC PATCH tcp-2.6 1/2] [TCP]: Discard fuzzy SACK blocks

From: Ilpo Järvinen <>

SACK processing code has been a sort of Russian roulette, as no
validation of SACK blocks was previously attempted. It is not
very clear what all the kinds of broken SACK blocks really mean
(e.g., one with start and end sequence numbers reversed).

This also fixes one remotely triggerable NULL-ptr access:
start_seq was trusted as a valid mark_lost_entry_seq, but
without validation it is relatively easy to inject an invalid
sequence number there that will cause a crash in the lost
marker code. The other SACK processing code seems safe, so this
could have been fixed locally, but that would leave SACK
processing hazardous in case of future modifications. IMHO, it
is better to close the whole roulette rather than disarm just
one bullet.

Signed-off-by: Ilpo Järvinen <>
 net/ipv4/tcp_input.c |   40 ++++++++++++++++++++++++++++++++++++++++
 1 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 4c8882e..ebd9739 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -977,6 +977,27 @@ static void tcp_update_reordering(struct sock *sk, const int metric,
  * so that we are allowed not to be bothered by order of our actions,
  * when multiple events arrive simultaneously. (see the function below).
+ * SACK block validation.
+ * ----------------------
+ *
+ * SACK block range validation checks that the received SACK block fits
+ * within the expected sequence limits, i.e., between SND.UNA and SND.NXT.
+ * Note that SND.UNA itself is excluded from the range even though it is a
+ * valid sequence number, because a SACK block starting there means the
+ * receiver is being inconsistent with itself (reporting SACK reneging
+ * when it should instead advance SND.UNA).
+ *
+ * With D-SACK the lower bound is extended to cover sequence space below
+ * SND.UNA down to undo_marker, which is the last point of interest. But
+ * there all simplicity ends: TCP might receive valid D-SACKs even below
+ * that. As long as they reside fully below undo_marker they do not affect
+ * behavior in any way and can therefore be safely ignored. In rare
+ * (really theoretical) cases, a D-SACK will cross that boundary due to
+ * skb fragmentation and packet reordering past the skb's retransmission.
+ * To handle those correctly, the acceptable range would have to be
+ * extended further, though the exact amount is hard to quantify (from
+ * the state TCP has?), so in the boundary-crossing case simply adjust
+ * start_seq to undo_marker.
+ *
  * Reordering detection.
  * --------------------
  * Reordering metric is maximal distance, which a packet can be displaced
@@ -1243,9 +1264,28 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 		struct sk_buff *skb;
 		__u32 start_seq = ntohl(sp->start_seq);
 		__u32 end_seq = ntohl(sp->end_seq);
+		u32 valid_range_start;
 		int fack_count;
 		state.dup_sack = (found_dup_sack && (i == state.first_sack_index));
+		/* SACK validation: start_seq == snd_una is non-sensical */
+		valid_range_start = tp->snd_una + 1;
+		if (state.dup_sack && tp->undo_marker) {
+			valid_range_start = tp->undo_marker;
+			/* Undo_marker boundary crossing, see comments above */
+			if (before(start_seq, tp->undo_marker) &&
+			    after(end_seq, tp->undo_marker))
+				start_seq = tp->undo_marker;
+		}
+		/* Discard SACK blocks that are seemingly bad */
+		if (before(start_seq, valid_range_start) ||
+		    after(end_seq, tp->snd_nxt) ||
+		    !before(start_seq, end_seq))
+			continue;
 		skb = cached_skb;
 		fack_count = cached_fack_count;
