Message-ID: <20160824145942.GB7905@breakpoint.cc>
Date: Wed, 24 Aug 2016 16:59:42 +0200
From: Florian Westphal <fw@...len.de>
To: Shmulik Ladkani <shmulik.ladkani@...il.com>
Cc: Florian Westphal <fw@...len.de>,
"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [RFC PATCH] net: ip_finish_output_gso: Attempt gso_size clamping
if segments exceed mtu
Shmulik Ladkani <shmulik.ladkani@...il.com> wrote:
> > Normal ipv4 routing via vm1, no iptables etc. present, so
> >
> > we have hypervisor 1500 -> 1500 VM1 1280 -> 1280 VM2
> >
> > Turning off gro avoids this problem.
>
> I hit the BUG only when VM2's mtu is not set to 1280 (kept to the 1500
> default).
Right,
> > Otherwise, the Hypervisor's TCP stack (sender) uses the TCP MSS advertised
> > by VM2 (which is 1240 if VM2's mtu is properly configured), so GRO taking
> > place on VM1's eth0 operates on the arriving segments (sized 1240).
True.
> Only if VM2 has an mtu of 1500 is the MSS seen by the Hypervisor during the
> handshake 1460, so GRO acting on VM1's eth0 operates on 1460-byte segments.
> This leads to "gso clamping" taking place, with the BUG in skb_segment
> (which, btw, seems sensitive to a change in gso_size only if GRO was
> merging into frag_list).
>
> Can you please acknowledge our setup and reproduction are aligned?
Yes, seems setups are identical.
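
For reference, the MSS values quoted above (1240 and 1460) follow from the
MTU minus the fixed IPv4 and TCP header lengths. A minimal sketch of that
arithmetic (the helper name is hypothetical, not kernel code; assumes
IPv4/TCP with no header options):

```python
# Illustrative only: MSS advertised for a given link MTU,
# assuming IPv4 (20-byte header) and TCP (20-byte header), no options.
IPV4_HDR_LEN = 20
TCP_HDR_LEN = 20

def tcp_mss_for_mtu(mtu: int) -> int:
    """Advertised TCP MSS for a given link MTU (IPv4, no options)."""
    return mtu - IPV4_HDR_LEN - TCP_HDR_LEN

print(tcp_mss_for_mtu(1280))  # 1240 -- VM2 with mtu properly set to 1280
print(tcp_mss_for_mtu(1500))  # 1460 -- VM2 left at the 1500 default
```

With VM2 at mtu 1280 the sender builds 1240-byte segments, so GRO on VM1
never produces GSO packets larger than VM1's egress mtu; only the 1460-byte
case forces the gso_size clamping path.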