Date:	Mon, 23 Sep 2013 20:32:27 +0000
From:	Anirban Chakraborty <abchak@...iper.net>
To:	annie li <annie.li@...cle.com>
CC:	Jason Wang <jasowang@...hat.com>, Wei Liu <wei.liu2@...rix.com>,
	"<netdev@...r.kernel.org>" <netdev@...r.kernel.org>,
	Ian Campbell <ian.campbell@...rix.com>,
	"<xen-devel@...ts.xen.org>" <xen-devel@...ts.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API
 and advertise this feature


On Sep 22, 2013, at 11:22 PM, annie li <annie.li@...cle.com> wrote:

> 
> On 2013-9-23 13:02, Jason Wang wrote:
>> On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
>>> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@...rix.com> wrote:
>>> 
>>>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>>>> Anirban was seeing netfront receive MTU-size packets, which degraded
>>>>>> throughput. The following patch makes netfront use the GRO API, which
>>>>>> improves throughput for that case.
>>>>>> 
>>>>>> Signed-off-by: Wei Liu <wei.liu2@...rix.com>
>>>>>> Signed-off-by: Anirban Chakraborty <abchak@...iper.net>
>>>>>> Cc: Ian Campbell <ian.campbell@...rix.com>
>>>>> Maybe a dumb question: doesn't Xen depend on the driver of the host card to
>>>>> do GRO and pass it to netfront? What's the case where netfront can receive
>>>> That would be the ideal situation: netback pushes large packets to
>>>> netfront and netfront sees large packets.
>>>> 
>>>>> an MTU-size packet, for a card that does not support GRO in the host? Doing
>>>> However, Anirban saw a case where the backend interface receives large
>>>> packets but netfront sees MTU-size packets, so my thought is that some
>>>> configuration leads to this issue. As we cannot tell users what to enable
>>>> and what not to enable, I would like to solve this within our driver.
>>>> 
>>>>> GRO twice may introduce extra overhead.
>>>>> 
>>>> AIUI, if the packet the frontend sees is already large then the GRO path
>>>> is quite short and will not introduce a heavy penalty, while on the other
>>>> hand, if the packet is segmented, doing GRO improves throughput.
>>>> 
>>> Thanks, Wei, for explaining and submitting the patch. I would like to add the following to what you have already mentioned.
>>> In my configuration, I was seeing netback push large packets to the guest (CentOS 6.4) but netfront receive MTU-sized packets. With this patch on, I do see large packets received on the guest interface. As a result, there was a substantial throughput improvement on the guest side (2.8 Gbps to 3.8 Gbps). Also note that GRO was already enabled in the host NIC driver.
>>> 
>>> -Anirban
>> In this case, even if you still want to do GRO, it's better to find the
>> root cause of why the GSO packets were segmented.
> 
> Totally agree, we need to find out why large packets are segmented only in the different-host case.

It appears (from looking at the netback code) that although GSO is turned on at the netback, the guest receives large packets only:
1. if it is a local packet (VM to VM on the same host), in which case netfront does LRO, or
2. by turning on GRO explicitly (with this patch); see the sketch below.
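For reference, a minimal sketch of what "convert to GRO API and advertise this feature" typically means in a NAPI driver's RX path is included below. This is not the actual xen-netfront patch: example_poll(), example_setup_features() and the example_dequeue_rx() helper are hypothetical placeholders for the driver's own routines; only napi_gro_receive(), napi_complete(), netif_receive_skb() and NETIF_F_GRO are real kernel interfaces.

/* Sketch only -- not the xen-netfront patch. example_dequeue_rx() is a
 * hypothetical stand-in for the driver's own RX dequeue routine. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *example_dequeue_rx(struct napi_struct *napi); /* hypothetical */

static int example_poll(struct napi_struct *napi, int budget)
{
	struct sk_buff *skb;
	int work_done = 0;

	while (work_done < budget && (skb = example_dequeue_rx(napi))) {
		/* Before the conversion the driver would call
		 * netif_receive_skb(skb) here; handing the skb to the GRO
		 * engine instead lets consecutive MTU-size segments of the
		 * same stream be merged back into large packets. */
		napi_gro_receive(napi, skb);
		work_done++;
	}

	if (work_done < budget)
		napi_complete(napi);

	return work_done;
}

/* Advertise the feature when the net_device is set up. */
static void example_setup_features(struct net_device *dev)
{
	dev->features |= NETIF_F_GRO;
}

With GRO advertised, ethtool -k <iface> on the guest should report generic-receive-offload: on, and ethtool -K <iface> gro off can be used to compare throughput with and without the merging.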

-Anirban
