Message-ID: <1670906381-25161-1-git-send-email-quic_subashab@quicinc.com>
Date: Mon, 12 Dec 2022 21:39:41 -0700
From: Subash Abhinov Kasiviswanathan <quic_subashab@...cinc.com>
To: <ast@...nel.org>, <daniel@...earbox.net>, <andrii@...nel.org>,
<martin.lau@...ux.dev>, <john.fastabend@...il.com>,
<song@...nel.org>, <yhs@...com>, <kpsingh@...nel.org>,
<sdf@...gle.com>, <haoluo@...gle.com>, <jolsa@...nel.org>,
<davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
<pabeni@...hat.com>, <bpf@...r.kernel.org>,
<netdev@...r.kernel.org>
CC: Subash Abhinov Kasiviswanathan <quic_subashab@...cinc.com>,
"Sean Tranchetti" <quic_stranche@...cinc.com>
Subject: [PATCH net] filter: Account for tail adjustment during pull operations

Extending the tail can have unexpected side effects if a program reads
content beyond the headlen of the head skb and all the skbs in the GSO
frag_list are linear with no head_frag -
kernel BUG at net/core/skbuff.c:4219!
pc : skb_segment+0xcf4/0xd2c
lr : skb_segment+0x63c/0xd2c
Call trace:
skb_segment+0xcf4/0xd2c
__udp_gso_segment+0xa4/0x544
udp4_ufo_fragment+0x184/0x1c0
inet_gso_segment+0x16c/0x3a4
skb_mac_gso_segment+0xd4/0x1b0
__skb_gso_segment+0xcc/0x12c
udp_rcv_segment+0x54/0x16c
udp_queue_rcv_skb+0x78/0x144
udp_unicast_rcv_skb+0x8c/0xa4
__udp4_lib_rcv+0x490/0x68c
udp_rcv+0x20/0x30
ip_protocol_deliver_rcu+0x1b0/0x33c
ip_local_deliver+0xd8/0x1f0
ip_rcv+0x98/0x1a4
deliver_ptype_list_skb+0x98/0x1ec
__netif_receive_skb_core+0x978/0xc60
Fix this by marking these skbs as GSO_DODGY so segmentation can handle
the tail updates accordingly.
Fixes: 5293efe62df8 ("bpf: add bpf_skb_change_tail helper")
Signed-off-by: Sean Tranchetti <quic_stranche@...cinc.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <quic_subashab@...cinc.com>
---
net/core/filter.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/net/core/filter.c b/net/core/filter.c
index bb0136e..d5f7f79 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1654,6 +1654,20 @@ static DEFINE_PER_CPU(struct bpf_scratchpad, bpf_sp);
static inline int __bpf_try_make_writable(struct sk_buff *skb,
unsigned int write_len)
{
+ struct sk_buff *list_skb = skb_shinfo(skb)->frag_list;
+
+ if (skb_is_gso(skb) && list_skb && !list_skb->head_frag &&
+ skb_headlen(list_skb)) {
+ int headlen = skb_headlen(skb);
+ int err = skb_ensure_writable(skb, write_len);
+
+ /* pskb_pull_tail() has occurred */
+ if (!err && headlen != skb_headlen(skb))
+ skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
+
+ return err;
+ }
+
return skb_ensure_writable(skb, write_len);
}
--
2.7.4