Date:	Mon, 03 Mar 2014 10:13:22 +0100
From:	Christian Borntraeger <borntraeger@...ibm.com>
To:	Vlad Yasevich <vyasevich@...il.com>, vyasevic@...hat.com
CC:	"David S. Miller" <davem@...emloft.net>,
	Jason Wang <jasowang@...hat.com>,
	"Michael S. Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
	KVM list <kvm@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: macvtap performance regression (bisected) between 3.13 and 3.14-rc1

On 02/03/14 02:21, Vlad Yasevich wrote:
> On 03/01/2014 02:27 PM, Vlad Yasevich wrote:
>> On 03/01/2014 06:15 AM, Christian Borntraeger wrote:
>>> On 28/02/14 23:14, Vlad Yasevich wrote:
>>>> On 02/27/2014 03:52 PM, Christian Borntraeger wrote:
>>>>> Vlad,
>>>>>
>>>>> commit 6acf54f1cf0a6747bac9fea26f34cfc5a9029523
>>>>>     macvtap: Add support of packet capture on macvtap device.
>>>>>
>>>>> causes a performance regression for iperf traffic between two KVM guests
>>>>> on my s390 system. Both guests are connected via two macvtaps on the same OSA
>>>>> network card.
>>>>> Before that patch I get ~20 Gbit/sec between the two guests; afterwards I get
>>>>> ~4 Gbit/sec.
>>>>>
>>>>> Latency seems to be unchanged (uperf 1-byte ping-pong).
>>>>>
>>>>> According to ifconfig in the guest, I have ~1500 bytes per packet with this
>>>>> patch and ~40000 bytes without. So for some reason this patch causes the
>>>>> network stack to do segmentation. (The guest kernel stays the same; only the
>>>>> host kernel is changed.)
>>>>>
>>>>> Any ideas?
>>>>
>>>> I am looking.  It shouldn't cause additional segmentation, and when I ran
>>>> netperf on the code I didn't see any difference in throughput.
>>>
>>> Don't know if the different bytes-per-packet ratio is really the reason or
>>> just a side effect. As a hint: the underlying network device does not support
>>> segmentation offload, but this should not matter for traffic between two guests.
>>
>> Could you post 'ethtool -k' output for both the lower-level device and the
>> macvtap device?
>>
>> Thanks
>> -vlad
>>
> 
> Ok.  I think I see what's happening.  Since you turn off offloads on the
> lower device, that's propagated to the macvlan device.  As a result, when
> we call dev_queue_xmit on the vlan->dev, we end up segmenting, since the
> lower level says it doesn't support segmentation.
> 
> One way to fix this is to never disable offloads on macvlan.  macvlan
> will always try to use __dev_queue_xmit() with its lower device, so any
> segmentation can happen there.
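
For illustration only, here is a small user-space C model of the behaviour
described above: the macvlan device inherits its feature flags from the lower
device, and once dev_queue_xmit() is used on the macvlan, a GSO skb is
segmented in software whenever those flags lack TSO/GSO. The F_* flags and
helpers below are invented stand-ins for the kernel's NETIF_F_* flags and
feature-propagation code, not the real implementation.

/* Toy model: macvlan keeps only the features the lower device also
 * advertises, and the transmit path segments GSO skbs in software when
 * the transmitting device offers neither TSO nor software GSO. */
#include <stdio.h>

#define F_SG   (1u << 0)   /* scatter-gather (stand-in flag)           */
#define F_TSO  (1u << 1)   /* hardware TCP segmentation offload        */
#define F_GSO  (1u << 2)   /* software GSO                             */

static unsigned int propagate_features(unsigned int lowerdev, unsigned int wanted)
{
	/* Current behaviour as described: inherit from the lower device. */
	return wanted & lowerdev;
}

static int segments_in_software(unsigned int dev_features)
{
	/* dev_queue_xmit() falls back to software segmentation when the
	 * device can take neither a TSO nor a GSO skb. */
	return !(dev_features & (F_TSO | F_GSO));
}

int main(void)
{
	unsigned int osa_card = F_SG;                 /* offloads disabled on the lower device */
	unsigned int wanted   = F_SG | F_TSO | F_GSO; /* what macvlan would like to offer      */
	unsigned int macvlan  = propagate_features(osa_card, wanted);

	printf("macvlan features 0x%x: GSO skb %s\n", macvlan,
	       segments_in_software(macvlan)
	           ? "is segmented before macvlan_start_xmit()"
	           : "passes through unsegmented");
	return 0;
}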

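Along the same lines, a rough model of the remedy floated above: keep software
GSO advertised on the macvlan device regardless of the lower device, so a GSO
skb survives dev_queue_xmit() on the macvlan and any segmentation is left to
the lower device's own transmit path (and, for guest-to-guest traffic forwarded
internally, ideally does not happen at all). Again, the names are illustrative;
this is not the actual macvlan patch.

/* Toy model of the suggested fix: never let the macvlan device lose
 * software GSO just because the lower device has no hardware offloads. */
#include <stdio.h>

#define F_SG   (1u << 0)
#define F_TSO  (1u << 1)
#define F_GSO  (1u << 2)   /* software segmentation, always safe to claim */

static unsigned int propagate_features_fixed(unsigned int lowerdev, unsigned int wanted)
{
	/* Inherit hardware offloads from the lower device as before, but
	 * keep software GSO on unconditionally so the macvlan's transmit
	 * path never segments; the lower device's path decides later. */
	return (wanted & lowerdev) | F_GSO;
}

int main(void)
{
	unsigned int osa_card = F_SG;
	unsigned int wanted   = F_SG | F_TSO | F_GSO;
	unsigned int macvlan  = propagate_features_fixed(osa_card, wanted);

	printf("macvlan features 0x%x: GSO skb %s dev_queue_xmit() on the macvlan\n",
	       macvlan, (macvlan & F_GSO) ? "survives" : "is segmented in");
	return 0;
}
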
If you have anything that I should test, let me know.

Christian
