Message-ID: <87czyq13xg.fsf@dja-thinkpad.axtens.net>
Date: Fri, 01 Jan 2021 00:49:15 +1100
From: Daniel Axtens <dja@...ens.net>
To: Aya Levin <ayal@...dia.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, Moshe Shemesh <moshe@...lanox.com>,
Tariq Toukan <tariqt@...dia.com>, Aya Levin <ayal@...dia.com>
Subject: Re: [PATCH net] net: ipv6: Validate GSO SKB before finish IPv6 processing

Hi Aya,

> Daniel Axtens, does this solve the issue referred to in your commit?
> 8914a595110a bnx2x: disable GSO where gso_size is too big for hardware

No, because:

> Note: These cases are handled in the same manner in IPv4 output finish.
> This patch aligns the behavior of IPv6 and IPv4.

and the issue my patch addresses was hit on IPv4, not IPv6.
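
For reference, the corresponding IPv4 slowpath lives in
net/ipv4/ip_output.c as ip_finish_output_gso(). Paraphrasing from
memory - so treat this as a sketch of the mainline code rather than a
verbatim quote - it does essentially what your patch below adds for v6:

static int ip_finish_output_gso(struct net *net, struct sock *sk,
				struct sk_buff *skb, unsigned int mtu)
{
	netdev_features_t features;
	struct sk_buff *segs, *nskb;
	int ret = 0;

	/* common case: the GSO segments fit within the egress MTU */
	if (skb_gso_validate_network_len(skb, mtu))
		return ip_finish_output2(net, sk, skb);

	/* Slowpath - the GSO segment length exceeds the egress MTU:
	 * software-segment the skb, then fragment each segment.
	 */
	features = netif_skb_features(skb);
	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
	if (IS_ERR_OR_NULL(segs)) {
		kfree_skb(skb);
		return -ENOMEM;
	}

	consume_skb(skb);

	skb_list_walk_safe(segs, segs, nskb) {
		int err;

		skb_mark_not_on_list(segs);
		err = ip_fragment(net, sk, segs, mtu, ip_finish_output2);
		if (err && ret == 0)
			ret = err;
	}

	return ret;
}
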
In my case, the issue was that a packet came in on an ibmveth
interface marked GSO but with a very large gso_size. Open vSwitch then
forwarded it to the bnx2x interface, where the firmware promptly
panicked.
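
(For comparison, 8914a595110a took a driver-level approach: it taught
the ->ndo_features_check() hook to drop the GSO feature flags for any
skb whose segments would exceed what the firmware can take, forcing
the stack to software-segment it first. From memory - so please
double-check the constants and helper names against the actual commit -
the check in bnx2x_features_check() is roughly:

	/* A skb with gso_size + header length > 9700 will cause a
	 * firmware panic, so drop GSO support for it.
	 *
	 * To keep the fast path fast, assume headers add at most 700
	 * bytes and only do the full (slower) validation when
	 * gso_size exceeds 9000.
	 */
	if (unlikely(skb_is_gso(skb) &&
		     (skb_shinfo(skb)->gso_size > 9000) &&
		     !skb_gso_validate_mac_len(skb, 9700)))
		features &= ~NETIF_F_GSO_MASK;

That stops the panic at the bnx2x boundary, but does nothing for
over-sized GSO packets headed to some other device, which is why a
stack-level check like yours still seems worthwhile.)
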
There's a nice long description at
https://patchwork.ozlabs.org/project/netdev/patch/20180111235905.10110-1-dja@axtens.net/
however, that description wasn't included in the commit message.

Kind regards,
Daniel

>
> Thanks,
> Aya
>
> diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
> index 749ad72386b2..36466f2211bd 100644
> --- a/net/ipv6/ip6_output.c
> +++ b/net/ipv6/ip6_output.c
> @@ -125,8 +125,43 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
>  	return -EINVAL;
>  }
>
> +static int
> +ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
> +				    struct sk_buff *skb, unsigned int mtu)
> +{
> +	struct sk_buff *segs, *nskb;
> +	netdev_features_t features;
> +	int ret = 0;
> +
> +	/* Please see corresponding comment in ip_finish_output_gso
> +	 * describing the cases where GSO segment length exceeds the
> +	 * egress MTU.
> +	 */
> +	features = netif_skb_features(skb);
> +	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
> +	if (IS_ERR_OR_NULL(segs)) {
> +		kfree_skb(skb);
> +		return -ENOMEM;
> +	}
> +
> +	consume_skb(skb);
> +
> +	skb_list_walk_safe(segs, segs, nskb) {
> +		int err;
> +
> +		skb_mark_not_on_list(segs);
> +		err = ip6_fragment(net, sk, segs, ip6_finish_output2);
> +		if (err && ret == 0)
> +			ret = err;
> +	}
> +
> +	return ret;
> +}
> +
>  static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
>  {
> +	unsigned int mtu;
> +
>  #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
>  	/* Policy lookup after SNAT yielded a new policy */
>  	if (skb_dst(skb)->xfrm) {
> @@ -135,7 +170,11 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff
>  	}
>  #endif
>
> -	if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
> +	mtu = ip6_skb_dst_mtu(skb);
> +	if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu))
> +		return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
> +
> +	if ((skb->len > mtu && !skb_is_gso(skb)) ||
>  	    dst_allfrag(skb_dst(skb)) ||
>  	    (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
>  		return ip6_fragment(net, sk, skb, ip6_finish_output2);
> --
> 2.14.1