Message-ID: <20130206155049.GB14735@redhat.com>
Date: Wed, 6 Feb 2013 17:50:49 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: alexander.h.duyck@...el.com, stephen.s.ko@...el.com,
jeffrey.t.kirsher@...el.com, David Miller <davem@...emloft.net>,
netdev@...r.kernel.org, sony.chacko@...gic.com, mchan@...adcom.com,
jitendra.kalsaria@...gic.com, eilong@...adcom.com
Subject: Re: regression caused by 1d2024f61ec14bdb0c57a97a3fe73685abc2d198?
On Wed, Feb 06, 2013 at 05:07:39AM -0800, Eric Dumazet wrote:
> On Wed, 2013-02-06 at 13:43 +0200, Michael S. Tsirkin wrote:
> > It seems that starting with kernel 3.3, ixgbe sets gso_size for
> > incoming frames, and that this might result in gso_size being set
> > even when gso_type is 0.
> > This in turn leads to a crash at macvtap_skb_to_vnet_hdr
> > drivers/net/macvtap.c:628
> > which has this code:
> >
> > 	if (skb_is_gso(skb)) {
> > 		struct skb_shared_info *sinfo = skb_shinfo(skb);
> >
> > 		/* This is a hint as to how much should be linear. */
> > 		vnet_hdr->hdr_len = skb_headlen(skb);
> > 		vnet_hdr->gso_size = sinfo->gso_size;
> > 		if (sinfo->gso_type & SKB_GSO_TCPV4)
> > 			vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
> > 		else if (sinfo->gso_type & SKB_GSO_TCPV6)
> > 			vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
> > 		else if (sinfo->gso_type & SKB_GSO_UDP)
> > 			vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
> > 		else
> > 			BUG();
> > 		if (sinfo->gso_type & SKB_GSO_TCP_ECN)
> > 			vnet_hdr->gso_type |= VIRTIO_NET_HDR_GSO_ECN;
> > 	} else
> > 		vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
> >
> >
> > Since skb_is_gso() only tests gso_size, such a frame takes the GSO
> > branch above and hits the BUG().
> >
> > What's the right way to handle this? Should skb_is_gso be
> > changed to test gso_type != 0?
> >
>
> Or fix ixgbe to set gso_type in ixgbe_get_headlen(), as it does all the
> dissection.
Hmm, ixgbe_get_headlen isn't run on linear skbs though.
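
For reference, the skb_is_gso() change I asked about above would look
something like this (completely untested, just to show what I mean):

 static inline int skb_is_gso(const struct sk_buff *skb)
 {
-	return skb_shinfo(skb)->gso_size;
+	return skb_shinfo(skb)->gso_type != 0;
 }
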
Also, I'm not sure I understand when drivers should set gso_size
for incoming frames, or what a reasonable value would be.
The commit log talks about improved performance for lossy connections;
if that's the motivation, isn't this something the net core should set?
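
For what it's worth, my reading of the drivers that do this today
(listed below) is that the value they pick is roughly the sender's
MSS, reconstructed from the coalesced skb. Something along these
lines; the helper name and the append_cnt parameter are made up here,
purely to illustrate:

	/* Illustration only: "append_cnt" stands for however the hardware
	 * reports the number of frames it coalesced into this skb.
	 */
	static void example_set_rx_gso_size(struct sk_buff *skb, u16 append_cnt)
	{
		if (!append_cnt)
			return;

		/* approximate the sender's MSS: coalesced payload / frame count */
		skb_shinfo(skb)->gso_size =
			DIV_ROUND_UP(skb->len - skb_headlen(skb), append_cnt);
		skb_shinfo(skb)->gso_segs = append_cnt;
	}
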
I see 3 in-tree drivers that do this:
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c: skb_shinfo(skb)->gso_size = bnx2x
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c: skb_shinfo(skb)->gso_size = DIV_ROUND_UP((skb->le
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c: skb_shinfo(skb)->gso_size = qlcnic_get_lr
It seems likely the same issue applies there?
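
If so, I guess the minimal driver-side fix would be to record the
protocol next to the size, so that consumers like macvtap never see
gso_size set while gso_type is still 0. Untested sketch, with lro_mss
standing in for whatever value the driver already computes:

	/* sketch: record what was coalesced, not just how big it was */
	skb_shinfo(skb)->gso_size = lro_mss;
	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;	/* SKB_GSO_TCPV6 for IPv6 LRO */
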
--
MST