Date:	Wed, 20 Feb 2013 15:56:59 +0000
From:	"Roberts, Lee A." <lee.roberts@...com>
To:	"linux-sctp@...r.kernel.org" <linux-sctp@...r.kernel.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [PATCH 3/3] sctp: fix association hangs due to reassembly/ordering logic

From: Lee A. Roberts <lee.roberts@...com>

Resolve SCTP association hangs observed during SCTP stress
testing.  Observable symptoms include communication hangs
with data being held in the association reassembly and/or lobby
(ordering) queues.  Close examination of the reassembly queue
shows missing packets.

In sctp_eat_data(), enter partial delivery mode only if the
data on the head of the reassembly queue is at or before the
cumulative TSN ACK point.

In sctp_ulpq_retrieve_partial() and sctp_ulpq_retrieve_first(),
correct the message reassembly logic for SCTP partial delivery.
Change the logic to ensure that as much data as possible is sent
with the initial partial delivery and that subsequent partial
deliveries contain all available data.

In sctp_ulpq_renege(), adjust the logic to enter partial delivery
only if the incoming chunk remains on the reassembly queue
after processing by sctp_ulpq_tail_data().  Remove the call to
sctp_tsnmap_mark(), as this is handled correctly by
sctp_ulpq_tail_data().

Patch applies to the linux-3.8 kernel.

Signed-off-by: Lee A. Roberts <lee.roberts@...com>
---
 net/sctp/sm_statefuns.c |   12 ++++++++++--
 net/sctp/ulpqueue.c     |   33 ++++++++++++++++++++++++++-------
 2 files changed, 36 insertions(+), 9 deletions(-)

diff -uprN -X linux-3.8-vanilla/Documentation/dontdiff linux-3.8-SCTP+2/net/sctp/sm_statefuns.c linux-3.8-SCTP+3/net/sctp/sm_statefuns.c
--- linux-3.8-SCTP+2/net/sctp/sm_statefuns.c	2013-02-18 16:58:34.000000000 -0700
+++ linux-3.8-SCTP+3/net/sctp/sm_statefuns.c	2013-02-20 08:31:51.092132884 -0700
@@ -6090,7 +6090,8 @@ static int sctp_eat_data(const struct sc
 	size_t datalen;
 	sctp_verb_t deliver;
 	int tmp;
-	__u32 tsn;
+	__u32 tsn, ctsn;
+	struct sk_buff *skb;
 	struct sctp_tsnmap *map = (struct sctp_tsnmap *)&asoc->peer.tsn_map;
 	struct sock *sk = asoc->base.sk;
 	struct net *net = sock_net(sk);
@@ -6160,7 +6161,14 @@ static int sctp_eat_data(const struct sc
 		/* Even if we don't accept this chunk there is
 		 * memory pressure.
 		 */
-		sctp_add_cmd_sf(commands, SCTP_CMD_PART_DELIVER, SCTP_NULL());
+		skb = skb_peek(&asoc->ulpq.reasm);
+		if (skb != NULL) {
+			ctsn = sctp_skb2event(skb)->tsn;
+			if (TSN_lte(ctsn,
+				sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map)))
+				sctp_add_cmd_sf(commands,
+					SCTP_CMD_PART_DELIVER, SCTP_NULL());
+		}
 	}
 
 	/* Spill over rwnd a little bit.  Note: While allowed, this spill over
diff -uprN -X linux-3.8-vanilla/Documentation/dontdiff linux-3.8-SCTP+2/net/sctp/ulpqueue.c linux-3.8-SCTP+3/net/sctp/ulpqueue.c
--- linux-3.8-SCTP+2/net/sctp/ulpqueue.c	2013-02-20 08:17:53.679233365 -0700
+++ linux-3.8-SCTP+3/net/sctp/ulpqueue.c	2013-02-20 08:27:02.785042744 -0700
@@ -540,14 +540,19 @@ static struct sctp_ulpevent *sctp_ulpq_r
 		ctsn = cevent->tsn;
 
 		switch (cevent->msg_flags & SCTP_DATA_FRAG_MASK) {
+		case SCTP_DATA_FIRST_FRAG:
+			if (!first_frag)
+				return NULL;
+			goto done;
 		case SCTP_DATA_MIDDLE_FRAG:
 			if (!first_frag) {
 				first_frag = pos;
 				next_tsn = ctsn + 1;
 				last_frag = pos;
-			} else if (next_tsn == ctsn)
+			} else if (next_tsn == ctsn) {
 				next_tsn++;
-			else
+				last_frag = pos;
+			} else
 				goto done;
 			break;
 		case SCTP_DATA_LAST_FRAG:
@@ -651,6 +656,14 @@ static struct sctp_ulpevent *sctp_ulpq_r
 			} else
 				goto done;
 			break;
+
+		case SCTP_DATA_LAST_FRAG:
+			if (!first_frag)
+				return NULL;
+			else
+				goto done;
+			break;
+
 		default:
 			return NULL;
 		}
@@ -1054,6 +1067,7 @@ void sctp_ulpq_renege(struct sctp_ulpq *
 		      gfp_t gfp)
 {
 	struct sctp_association *asoc;
+	struct sk_buff *skb;
 	__u16 needed, freed;
 
 	asoc = ulpq->asoc;
@@ -1074,12 +1088,17 @@ void sctp_ulpq_renege(struct sctp_ulpq *
 	}
 	/* If able to free enough room, accept this chunk. */
 	if (chunk && (freed >= needed)) {
-		__u32 tsn;
+		__u32 tsn, ctsn;
 		tsn = ntohl(chunk->subh.data_hdr->tsn);
-		sctp_tsnmap_mark(&asoc->peer.tsn_map, tsn, chunk->transport);
-		sctp_ulpq_tail_data(ulpq, chunk, gfp);
-
-		sctp_ulpq_partial_delivery(ulpq, gfp);
+		if (sctp_ulpq_tail_data(ulpq, chunk, gfp) == 0) {
+			skb = skb_peek(&ulpq->reasm);
+			if (skb != NULL) {
+				ctsn = sctp_skb2event(skb)->tsn;
+				if (TSN_lte(ctsn, tsn))
+					sctp_ulpq_partial_delivery(ulpq, chunk,
+						gfp);
+			}
+		}
 	}
 
 	sk_mem_reclaim(asoc->base.sk);
