Date:	Tue, 03 Feb 2015 15:08:58 +0800
From:	Fan Du <fengyuleidian0615@...il.com>
To:	alexander.h.duyck@...hat.com
CC:	Fan Du <fan.du@...el.com>, bhutchings@...arflare.com,
	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [PATCHv2 net] net: restore lro after device detached from bridge

On 2015-02-02 18:35, Alexander Duyck wrote:
> On 02/01/2015 06:20 PM, Fan Du wrote:
>> On 2015-01-31 04:48, Alexander Duyck wrote:
>>> On 01/30/2015 04:33 AM, Fan Du wrote:
>>>> When a device is either detached from a bridge or switched
>>>> out of FORWARDING state, the original LRO feature should be
>>>> re-enabled where possible, for good reason: hw features like
>>>> receive side coalescing can then come back into play.
>>>>
>>>> BEFORE:
>>>> echo 1 > /proc/sys/net/ipv4/conf/ens806f0/forwarding && ethtool -k ens806f0 | grep large
>>>> large-receive-offload: off
>>>>
>>>> echo 0 > /proc/sys/net/ipv4/conf/ens806f0/forwarding && ethtool -k ens806f0 | grep large
>>>> large-receive-offload: off
>>>>
>>>> AFTER:
>>>> echo 1 > /proc/sys/net/ipv4/conf/ens806f0/forwarding && ethtool -k ens806f0 | grep large
>>>> large-receive-offload: off
>>>>
>>>> echo 0 > /proc/sys/net/ipv4/conf/ens806f0/forwarding && ethtool -k ens806f0 | grep large
>>>> large-receive-offload: on
>>>>
>>>> Signed-off-by: Fan Du <fan.du@...el.com>
>>>> Fixes: 0187bdfb0567 ("net: Disable LRO on devices that are forwarding")
>>>
>>
>>> First off this isn't a "fix".  This is going to likely break more than
>>> it fixes.  The main reason why LRO is disabled is because it can cause
>>> more harm than it helps.  Since GRO is available we should err on the
>>> side of caution since enabling LRO/RSC can have undesirable side effects
>>> in a number of cases.
>>
>> I think you are talking about bad scenarios when the net device is attached to a bridge.
>> Then what good reason does the user have to pay extra CPU power for GRO, instead
>> of using hw-capable LRO/RSC, once this net device is detached from the bridge and is
>> acting as a standalone NIC?
>>
>> Note, RSC defaults to *ON* in practice for ALL ixgbe NICs, the same as other RSC-capable
>> NICs. Attaching a net device to a bridge _once_ should not change its default configuration;
>> moreover, it's a subtle change without any message, which the user won't notice at all.
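>>
>> For instance (interface and bridge names here are just examples),
>> the silent flip can be reproduced with:
>>
>>   ip link add br0 type bridge
>>   ip link set ens806f0 master br0    # LRO is forced off here
>>   ip link set ens806f0 nomaster      # ...and silently stays off
>>   ethtool -k ens806f0 | grep large-receive-offload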

> No, RSC only has benefits for large IPv4/TCP packets.  However,
> historically there have been issues seen with small-packet performance
> with RSC enabled.

Only when the number of parallel clients exceeds 4 does GRO beat LRO on my testbed for small packets.
The difference comes from the fact that RSS hashes TCP flows from the clients into different NIC queues
for multiple CPUs, while the RSC engine inside the NIC has limited resources compared with the CPUs used by GRO.
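
One way to watch the RSS spreading is via the per-queue counters
(stat names are driver-specific; the ixgbe-style pattern below is an
assumption):

  ethtool -S ens806f0 | grep 'rx_queue_.*_packets'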

NICs: 82599EB
server: iperf -s -B 192.168.5.1
client: iperf -c 192.168.5.1 -i 1 -M 100 -P x

-P    Bandwidth (lro on, gro off)   Bandwidth (lro off, gro on)

1     2.31 Gbits/sec           947 Mbits/sec
2     3.09 Gbits/sec          1.97 Gbits/sec
3     3.19 Gbits/sec          2.70 Gbits/sec
4     3.16 Gbits/sec          3.39 Gbits/sec
5     3.23 Gbits/sec          3.33 Gbits/sec
6     3.19 Gbits/sec          3.74 Gbits/sec
7     3.18 Gbits/sec          3.88 Gbits/sec
8     3.17 Gbits/sec          3.24 Gbits/sec
9     3.16 Gbits/sec          3.70 Gbits/sec
10    3.15 Gbits/sec          3.76 Gbits/sec
11    3.10 Gbits/sec          4.03 Gbits/sec
12    3.11 Gbits/sec          3.13 Gbits/sec
13    3.12 Gbits/sec          4.12 Gbits/sec
14    3.07 Gbits/sec          4.04 Gbits/sec
15    3.03 Gbits/sec          3.14 Gbits/sec
16    2.99 Gbits/sec          3.93 Gbits/sec
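
(The two columns correspond to toggling the offloads roughly as
follows; the exact commands are my reconstruction, not from the
original test logs:)

  ethtool -K ens806f0 lro on  gro off    # left column
  ethtool -K ens806f0 lro off gro on     # right column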




> Some have been addressed; however, there are still
> other effects such as increasing latency for receive unless the push
> flag is set in the frame.
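
(The receive-latency point can be checked with a request/response
test; the exact invocation below is an illustration, not from this
thread:)

  netperf -H 192.168.5.1 -t TCP_RR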
>
> I still say this patch is not valid, even with your changes.  Your
> performance gain doesn't trump the regressions you would be causing on
> other people's platforms.
>
> I would suggest figuring out why you are seeing issues with routing or
> bridging being enabled and disabled and possibly cleaning up the issue
> via a script rather than trying to modify the kernel to make it take
> care of it for you.
> - Alex
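
For reference, the script-based cleanup Alex suggests could be as
small as the sketch below (interface name is an example; this is an
illustration, not code from the thread):

  #!/bin/sh
  # Take the port back out of the bridge, turn forwarding off again,
  # and re-enable LRO by hand, since the kernel will not restore it.
  IF=ens806f0
  ip link set "$IF" nomaster
  echo 0 > /proc/sys/net/ipv4/conf/"$IF"/forwarding
  ethtool -K "$IF" lro on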

