Date:	Sun, 30 Nov 2014 10:55:01 +0000
From:	"Du, Fan" <fan.du@...el.com>
To:	Florian Westphal <fw@...len.de>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"Du, Fan" <fan.du@...el.com>
Subject: RE: [PATCH net] gso: do GSO for local skb with size bigger than MTU



>-----Original Message-----
>From: Florian Westphal [mailto:fw@...len.de]
>Sent: Sunday, November 30, 2014 6:27 PM
>To: Du, Fan
>Cc: netdev@...r.kernel.org; davem@...emloft.net; fw@...len.de
>Subject: Re: [PATCH net] gso: do GSO for local skb with size bigger than MTU
>
>Fan Du <fan.du@...el.com> wrote:
>> Test scenario: two KVM guests sitting in different hosts communicate
>> to each other with a vxlan tunnel.
>>
>> All interface MTU is default 1500 Bytes, from guest point of view, its
>> skb gso_size could be as bigger as 1448Bytes, however after guest skb
>> goes through vxlan encapuslation, individual segments length of a gso
>> packet could exceed physical NIC MTU 1500, which will be lost at
>> recevier side.
>>
>> So in a virtualized environment, the length of a locally created skb
>> can, after encapsulation, be bigger than the underlay MTU. In such a
>> case, it's reasonable to do GSO first, then fragment any packet still
>> bigger than the MTU.
>>
>> +---------------+ TX     RX +---------------+
>> |   KVM Guest   | -> ... -> |   KVM Guest   |
>> +-+-----------+-+           +-+-----------+-+
>>   |Qemu/VirtIO|               |Qemu/VirtIO|
>>   +-----------+               +-----------+
>>        |                            |
>>        v tap0                  tap0 v
>>   +-----------+               +-----------+
>>   | ovs bridge|               | ovs bridge|
>>   +-----------+               +-----------+
>>        | vxlan                vxlan |
>>        v                            v
>>   +-----------+               +-----------+
>>   |    NIC    |    <------>   |    NIC    |
>>   +-----------+               +-----------+
>>
>> Steps to reproduce:
>>  1. Use the kernel's builtin openvswitch module to set up the ovs bridge.
>>  2. Run iperf without -M; communication gets stuck.
>
>Hmm, do we really want to support bridges containing interfaces with
>different MTUs?

All interfaces in the test scenario use the default MTU, 1500.

>It seems to me the only clean solution is to set the tap0 MTU so that it accounts
>for the bridge encap overhead.

This would force _ALL_ deployed instances in every cloud environment to have their
tap0 MTU changed.
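
For concreteness, here is the arithmetic behind both the failure and the tap0
workaround (header sizes assumed for VXLAN over IPv4 per RFC 7348; the numbers
are illustrative only, not taken from the patch):

    #include <stdio.h>

    int main(void)
    {
        int nic_mtu   = 1500;               /* physical NIC MTU */
        int gso_size  = 1448;               /* guest MSS: 1500 - 20 IP
                                               - 32 TCP (w/ timestamps) */
        int inner_ip  = 20 + 32 + gso_size; /* = 1500, fits guest MTU */
        int inner_eth = 14 + inner_ip;      /* = 1514 inner frame */

        /* VXLAN wraps the whole inner Ethernet frame in
         * outer IPv4 (20) + UDP (8) + VXLAN (8) headers.
         */
        int outer_ip  = 20 + 8 + 8 + inner_eth; /* = 1550 */

        printf("outer IP datagram: %d bytes, NIC MTU: %d -> %d over\n",
               outer_ip, nic_mtu, outer_ip - nic_mtu);

        /* The tap0-MTU workaround above amounts to: */
        printf("tap0 MTU needed: %d\n", nic_mtu - (20 + 8 + 8 + 14));
        return 0;
    }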

The current behavior pushes over-MTU-sized packets down to the NIC, which should
not happen in any case. And as I put it in another thread:
perform GSO on the skb, then try to do IP fragmentation if possible. If DF is set,
send back an ICMP message. If DF is not set, the user apparently wants the stack to
do IP fragmentation, and all the GSO-ed skbs will be sent out correctly, as expected.
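
A rough sketch of that intended output-path logic (simplified C; the helpers
skb_is_gso(), skb_gso_segment(), icmp_send() and ip_fragment() are real kernel
functions, but signatures and error handling here are abbreviated, and this is
not the actual patch):

    /* Hypothetical helper: send a locally built skb, segmenting
     * and/or fragmenting so nothing over-MTU reaches the NIC.
     */
    static int xmit_within_mtu(struct sk_buff *skb, unsigned int mtu,
                               int (*output)(struct sk_buff *))
    {
        if (skb->len <= mtu)
            return output(skb);

        /* 1. GSO skb: segment first, then re-check each segment. */
        if (skb_is_gso(skb)) {
            struct sk_buff *segs, *next;

            segs = skb_gso_segment(skb, netif_skb_features(skb));
            if (IS_ERR_OR_NULL(segs))
                return -EINVAL;
            consume_skb(skb);
            for (skb = segs; skb; skb = next) {
                next = skb->next;
                skb->next = NULL;
                xmit_within_mtu(skb, mtu, output);
            }
            return 0;
        }

        /* 2. Over-MTU packet with DF set: tell the sender. */
        if (ip_hdr(skb)->frag_off & htons(IP_DF)) {
            icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
                      htonl(mtu));
            kfree_skb(skb);
            return -EMSGSIZE;
        }

        /* 3. DF clear: the sender expects the stack to fragment. */
        return ip_fragment(skb, output);
    }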


