Message-ID: <20160710231410.14fa44c2@halley>
Date: Sun, 10 Jul 2016 23:14:10 +0300
From: Shmulik Ladkani <shmulik.ladkani@...ellosystems.com>
To: Florian Westphal <fw@...len.de>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
shmulik.ladkani@...il.com, netdev@...r.kernel.org,
Alexander Duyck <alexander.duyck@...il.com>,
Tom Herbert <tom@...bertland.com>
Subject: Re: [PATCH] net: ip_finish_output_gso: If skb_gso_network_seglen
exceeds MTU, do segmentation even for non IPSKB_FORWARDED skbs
On Sat, 9 Jul 2016 15:30:17 +0300 Shmulik Ladkani <shmulik.ladkani@...ellosystems.com> wrote:
> On Sat, 9 Jul 2016 11:00:20 +0200 Florian Westphal <fw@...len.de> wrote:
> > I am worried about this patch, skb_gso_validate_mtu is more costly than
> > the ->flags & FORWARD check; everyone pays this extra cost.
>
> I can get back with numbers regarding the impact on local traffic.
Florian, I've repeatedly tested how this affects locally generated
traffic, and it seems there's no impact (or at least one too small for
the netperf workload to notice):
- veth to veth (netns separated), both UFO enabled
- UDP_RR, with a request size (UDP payload) of 1500, to ensure the UFO
  path is taken (sender reaches ip_finish_output_gso)
- Before: stable v4.6.3
- After: stable v4.6.3 + suggested fix (kill the flags & IPSKB_FORWARDED check)
# netperf -T1,2 -c -C -L 192.168.13.2 -H 192.168.13.1 -l 20 -I 99,2 -i 10 -t UDP_RR -- -P 10002,10001 -r 1500,1
MIGRATED UDP REQUEST/RESPONSE TEST from 192.168.13.2 () port 10002 AF_INET to 192.168.13.1 () port 10001 AF_INET : +/-1.000% @ 99% conf. : demo : first burst 0 : cpu bind
Local /Remote
Socket Size   Request Resp.  Elapsed Trans.   CPU    CPU    S.dem   S.dem
Send   Recv   Size    Size   Time    Rate     local  remote local   remote
bytes  bytes  bytes   bytes  secs.   per sec  % S    % S    us/Tr   us/Tr

[before]
212992 212992 1500    1      20.00   45368.10 20.56  20.56  18.131  18.131
[after]
212992 212992 1500    1      20.00   45376.09 20.74  20.74  18.287  18.287
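For reference, the veth-to-veth, netns-separated topology above can be
reproduced along these lines (namespace/interface names are illustrative;
addresses are the ones used in the test; the commands require root, so by
default the script only prints what it would run):

```shell
#!/bin/sh
# Sketch of the test topology: two namespaces joined by a veth pair,
# UFO enabled on both ends so UDP sends reach ip_finish_output_gso.
# Set RUN=1 (as root) to actually apply the commands.
set -e

run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

run ip netns add ns1
run ip netns add ns2
run ip link add veth1 type veth peer name veth2
run ip link set veth1 netns ns1
run ip link set veth2 netns ns2
run ip netns exec ns1 ip addr add 192.168.13.1/24 dev veth1
run ip netns exec ns2 ip addr add 192.168.13.2/24 dev veth2
run ip netns exec ns1 ip link set veth1 up
run ip netns exec ns2 ip link set veth2 up

# Enable UFO on both ends (UFO must be supported by the running kernel).
run ip netns exec ns1 ethtool -K veth1 ufo on
run ip netns exec ns2 ethtool -K veth2 ufo on
```

The netserver would then run in ns1 and the netperf command above in ns2.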
Therefore it seems that trying to optimize this by setting
IPSKB_FORWARDED (or introducing a different mark) in 'iptunnel_xmit'
would offer no genuine benefit.
WDYT?
Were you thinking of other workloads on which we should assess the
impact?
Thanks,
Shmulik