Message-Id: <20210617000953.2787453-4-zenczykowski@gmail.com>
Date:   Wed, 16 Jun 2021 17:09:53 -0700
From:   Maciej Żenczykowski <zenczykowski@...il.com>
To:     Maciej Żenczykowski <maze@...gle.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>
Cc:     Linux Network Development Mailing List <netdev@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        BPF Mailing List <bpf@...r.kernel.org>,
        "David S . Miller" <davem@...emloft.net>,
        Willem de Bruijn <willemb@...gle.com>
Subject: [PATCH bpf-next v2 4/4] bpf: more lenient bpf_skb_net_shrink() with BPF_F_ADJ_ROOM_FIXED_GSO

From: Maciej Żenczykowski <maze@...gle.com>

This is to more closely match the behaviour of bpf_skb_change_proto(),
which now does not adjust gso_size, and thus theoretically supports
all gso types, and hence does not need to set SKB_GSO_DODGY nor reset
gso_segs to zero.
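
For illustration only, and not part of this patch: a minimal tc-BPF
sketch (libbpf-style; the program and its decap scenario are
hypothetical) of the case this change newly permits, i.e. a
fixed-gso header shrink on a non-TCP GSO skb. A negative len_diff
to bpf_skb_adjust_room() routes to bpf_skb_net_shrink().

  #include <linux/bpf.h>
  #include <linux/ipv6.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>

  /* Hypothetical decap: strip an outer IPv6 header. With
   * BPF_F_ADJ_ROOM_FIXED_GSO the kernel leaves gso_size untouched,
   * which this patch now accepts for any GSO type, not just
   * SKB_GSO_UDP_L4.
   */
  SEC("tc")
  int decap_fixed_gso(struct __sk_buff *skb)
  {
  	if (bpf_skb_adjust_room(skb, -(__s32)sizeof(struct ipv6hdr),
  				BPF_ADJ_ROOM_NET,
  				BPF_F_ADJ_ROOM_FIXED_GSO))
  		return TC_ACT_SHOT;
  	return TC_ACT_OK;
  }

  char _license[] SEC("license") = "GPL";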

Something similar should probably be done with bpf_skb_net_grow(),
but that code scares me.

Cc: Daniel Borkmann <daniel@...earbox.net>
Cc: Willem de Bruijn <willemb@...gle.com>
Signed-off-by: Maciej Żenczykowski <maze@...gle.com>
---
 net/core/filter.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 8f05498f497e..faf2bae0309b 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3506,11 +3506,10 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
 			       BPF_F_ADJ_ROOM_NO_CSUM_RESET)))
 		return -EINVAL;
 
-	if (skb_is_gso(skb) && !skb_is_gso_tcp(skb)) {
-		/* udp gso_size delineates datagrams, only allow if fixed */
-		if (!(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) ||
-		    !(flags & BPF_F_ADJ_ROOM_FIXED_GSO))
-			return -ENOTSUPP;
+	if (skb_is_gso(skb) &&
+	    !skb_is_gso_tcp(skb) &&
+	    !(flags & BPF_F_ADJ_ROOM_FIXED_GSO)) {
+		return -ENOTSUPP;
 	}
 
 	ret = skb_unclone(skb, GFP_ATOMIC);
@@ -3521,12 +3520,11 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
 	if (unlikely(ret < 0))
 		return ret;
 
-	if (skb_is_gso(skb)) {
+	if (skb_is_gso(skb) && !(flags & BPF_F_ADJ_ROOM_FIXED_GSO)) {
 		struct skb_shared_info *shinfo = skb_shinfo(skb);
 
 		/* Due to header shrink, MSS can be upgraded. */
-		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO))
-			skb_increase_gso_size(shinfo, len_diff);
+		skb_increase_gso_size(shinfo, len_diff);
 
 		/* Header must be checked, and gso_segs recomputed. */
 		shinfo->gso_type |= SKB_GSO_DODGY;
-- 
2.32.0.272.g935e593368-goog
