Message-ID: <20190903185121.56906d31@pixies>
Date:   Tue, 3 Sep 2019 18:51:21 +0300
From:   Shmulik Ladkani <shmulik@...anetworks.com>
To:     Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Daniel Borkmann <daniel@...earbox.net>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        netdev <netdev@...r.kernel.org>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Yonghong Song <yhs@...com>,
        Steffen Klassert <steffen.klassert@...unet.com>,
        Shmulik Ladkani <shmulik@...anetworks.com>,
        eyal@...anetworks.com
Subject: Re: BUG_ON in skb_segment, after bpf_skb_change_proto was applied

On Sun, 1 Sep 2019 16:05:48 -0400
Willem de Bruijn <willemdebruijn.kernel@...il.com> wrote:

> One quick fix is to disable sg and thus revert to copying in this
> case. Not ideal, but better than a kernel splat:
> 
> @@ -3714,6 +3714,9 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
>         sg = !!(features & NETIF_F_SG);
>         csum = !!can_checksum_protocol(features, proto);
> 
> +       if (list_skb && skb_headlen(list_skb) && !list_skb->head_frag)
> +               sg = false;
> +

Thanks Willem.

I followed this approach, and further refined it based on the conditions
that lead to this BUG_ON:

 - existence of a frag_list
 - mangled gso_size (using SKB_GSO_DODGY as a hint)
 - some frag in the frag_list has a linear part that is NOT head_frag,
   or a length not equal to the requested gso_size
   (a rough sketch of the combined check follows)
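
In rough C (a sketch only, meant to be read in the context of
net/core/skbuff.c; the real check is the skb_is_nonlinear_equal_frags()
helper in the patch below, and the helper name here is purely
illustrative), the combined condition amounts to:

static bool frag_list_mismatches_gso_size(struct sk_buff *head_skb,
					  unsigned int mss)
{
	struct sk_buff *iter;

	/* only relevant when a frag_list exists and gso_size is untrusted */
	if (!skb_shinfo(head_skb)->frag_list ||
	    !(skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY))
		return false;

	skb_walk_frags(head_skb, iter) {
		/* a linear part that is not a head_frag */
		if (skb_headlen(iter) && !iter->head_frag)
			return true;
		/* a frag length differing from the requested gso_size */
		if (iter->len != mss)
			return true;
	}

	return false;
}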

BTW, doing so allowed me to refactor a loop that tests for similar
conditions in the !(features & NETIF_F_GSO_PARTIAL) case, where partial
splitting at the frag_list pointer is attempted (see commits
07b26c9454a2 and 43170c4e0ba7).

I've tested this using the reproducer, with various linear skbs in
the frag_list and different gso_size mangling. All resulting 'segs'
looked correct. I have not tested on a live system yet.
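
For context, the kind of gso_size mangling exercised here can be
produced by a tc-BPF program along the following lines (a sketch only,
not the actual reproducer; the section name, headers and the omitted
header rewriting are assumptions):

/* Sketch: convert IPv4 GSO skbs to IPv6 framing via bpf_skb_change_proto().
 * In the kernel this helper marks the skb SKB_GSO_DODGY and shrinks
 * gso_size, while the frag_list built by the original segmentation/GRO
 * is left untouched.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("classifier")
int mangle_gso(struct __sk_buff *skb)
{
	/* 4->6 translation grows the network header, so gso_size shrinks */
	if (bpf_skb_change_proto(skb, bpf_htons(ETH_P_IPV6), 0))
		return TC_ACT_SHOT;

	/* rewriting of addresses / next header omitted for brevity */
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

When such a program sits on a path carrying GSO skbs that hold a
frag_list (e.g. GRO'd traffic being forwarded), the later skb_segment()
call sees the dodgy gso_size together with the original frag_list
layout, which is the situation the fallback below targets.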

Comments are welcome.

Specifically, I would like to know whether we can:
 - better refine the condition where this "sg=false fallback" needs
   to be applied
 - consolidate my new 'list_skb && (type & SKB_GSO_DODGY)' case with
   the existing '!(features & NETIF_F_GSO_PARTIAL)' case

See the patch below:


@@ -3470,6 +3470,27 @@ static inline skb_frag_t skb_head_frag_to_page_desc(struct sk_buff *frag_skb)
 	return head_frag;
 }
 
+static inline bool skb_is_nonlinear_equal_frags(struct sk_buff *skb,
+						unsigned int total_len,
+		                                unsigned int frag_len,
+						unsigned int *remain)
+{
+	struct sk_buff *iter;
+
+	skb_walk_frags(skb, iter) {
+		if (iter->len != frag_len && iter->next)
+			return false;
+		if (skb_headlen(iter) && !iter->head_frag)
+			return false;
+
+		total_len -= iter->len;
+	}
+
+	if (remain)
+		*remain = total_len;
+	return total_len == frag_len;
+}
+
 /**
  *	skb_segment - Perform protocol segmentation on skb.
  *	@head_skb: buffer to segment
@@ -3486,6 +3507,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	struct sk_buff *tail = NULL;
 	struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
 	skb_frag_t *frag = skb_shinfo(head_skb)->frags;
+	unsigned int type = skb_shinfo(head_skb)->gso_type;
 	unsigned int mss = skb_shinfo(head_skb)->gso_size;
 	unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
 	struct sk_buff *frag_skb = head_skb;
@@ -3510,13 +3532,29 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	sg = !!(features & NETIF_F_SG);
 	csum = !!can_checksum_protocol(features, proto);
 
-	if (sg && csum && (mss != GSO_BY_FRAGS))  {
+	if (sg && (mss != GSO_BY_FRAGS))  {
+		if (list_skb && (type & SKB_GSO_DODGY)) {
+			/* gso_size is untrusted.
+			 * If head_skb has a frag_list that contains a frag
+			 * with a linear (non head_frag) part, or a frag whose
+			 * length doesn't match the requested mss, fall back
+			 * to skb copying by disabling sg.
+			 */
+			if (!skb_is_nonlinear_equal_frags(head_skb, len, mss,
+						          NULL)) {
+				sg = false;
+				goto normal;
+			}
+		}
+
+		if (!csum)
+			goto normal;
+
 		if (!(features & NETIF_F_GSO_PARTIAL)) {
-			struct sk_buff *iter;
 			unsigned int frag_len;
 
 			if (!list_skb ||
-			    !net_gso_ok(features, skb_shinfo(head_skb)->gso_type))
+			    !net_gso_ok(features, type))
 				goto normal;
 
 			/* If we get here then all the required
@@ -3528,17 +3566,10 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 			 * the last are of the same length.
 			 */
 			frag_len = list_skb->len;
-			skb_walk_frags(head_skb, iter) {
-				if (frag_len != iter->len && iter->next)
-					goto normal;
-				if (skb_headlen(iter) && !iter->head_frag)
-					goto normal;
-
-				len -= iter->len;
-			}
-
-			if (len != frag_len)
+			if (!skb_is_nonlinear_equal_frags(head_skb, len,
+						          frag_len, &len)) {
 				goto normal;
+			}
 		}
 
 		/* GSO partial only requires that we trim off any excess that
