Message-ID: <606842712.20130107151133@eikelenboom.it>
Date: Mon, 7 Jan 2013 15:11:33 +0100
From: Sander Eikelenboom <linux@...elenboom.it>
To: Ian Campbell <Ian.Campbell@...rix.com>
CC: Rick Jones <rick.jones2@...com>,
Eric Dumazet <erdnetdev@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
annie li <annie.li@...cle.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>
Subject: Re: [PATCH] xen/netfront: improve truesize tracking

Monday, January 7, 2013, 2:41:03 PM, you wrote:
> On Thu, 2013-01-03 at 20:40 +0000, Sander Eikelenboom wrote:
>> Friday, December 21, 2012, 7:33:43 PM, you wrote:
>>
>> > I'm guessing that truesize checks matter more on the "inbound" path than
>> > the outbound path? If that is indeed the case, then instead of, or in
>> > addition to using the -s option to set the local (netperf side) socket
>> > buffer size, you should use a -S option to set the remote (netserver
>> > side) socket buffer size.
>>
>> > happy benchmarking,
>>
>> > rick jones
>>
>>
>> OK, ran them with -S as well:
> Are these all from domU -> dom0? Did you try traffic going the other
> way?
Yes, running netperf in domU and netserver in dom0, but I must say I'm far
from a netperf expert, so I don't even know for sure whether the tests I ran
give a good picture.
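For what it's worth, the invocations were along these lines (reconstructed
from the output below, so treat the exact values as an assumption; note that
Linux usually reports back double the socket buffer size that was requested):

  netperf -H 192.168.1.1 -t TCP_STREAM -l 60 -I 95,5 -- \
          -m 1432 -s 16384 -S 18000

where -s sets the local (netperf side) socket buffer and -S the remote
(netserver side) one, as Rick suggested.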
>> "current" is with netfront as is (skb->truesize += skb->data_len - RX_COPY_THRESHOLD;)
>> "patched" is with IanC's latest patch (skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;)
> skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to; is
> probably more interesting to compare against since we know the current
> one is buggy.
I'll see if I can run against that as well, although I thought Eric said he preferred "skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;".
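For reference, the three accounting variants side by side (just a sketch; the
exact placement in the RX path is my assumption, not copied from the netfront
source):

  /* In netfront's RX path, once the page frags for this skb
   * have been filled in:
   */

  /* "current" (known buggy): */
  skb->truesize += skb->data_len - RX_COPY_THRESHOLD;

  /* IanC's latest patch: charge a full page per fragment: */
  skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;

  /* the variant to compare against: charge only the data left in
   * the frags after pull_to bytes were pulled into the linear area:
   */
  skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;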
> These numbers generally look good, largely +/- 1%, often in favour of
> the updated code, but these two stand out as worrying:
>> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : +/-2.500% @ 95% conf. : demo
>> Recv Send Send
>> Socket Socket Message Elapsed
>> Size Size Size Time Throughput
>> bytes bytes bytes secs. KBytes/sec
>>
>> current 18000 16384 1432 60.00 37559.94
>> patched 18000 16384 1432 60.00 40630.66
>>
>> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : +/-2.500% @ 95% conf. : demo
>> Recv Send Send
>> Socket Socket Message Elapsed
>> Size Size Size Time Throughput
>> bytes bytes bytes secs. KBytes/sec
>>
>> current 28000 16384 16384 60.00 103766.68
>> patched 28000 16384 16384 60.00 93277.98
> That's at least a 10% slow down in both cases.
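(Checking the arithmetic: 93277.98 / 103766.68 is roughly 0.899, so the
patched run is about 10.1% slower in the 16384-byte message case.)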
>> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : +/-2.500% @ 95% conf. : demo
>> Socket Message Elapsed Messages
>> Size Size Time Okay Errors Throughput
>> bytes bytes secs # # KBytes/sec
>>
>> current 212992 65507 60.00 252586 0 269305.73
>> current 2280 60.00 229371 244553.96
>> patched 212992 65507 60.00 256209 0 273168.32
>> patched 2280 60.00 201670 215019.54
> The recv numbers here aren't too pleasing either.
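(Quantified: the patched receive-side throughput, 215019.54 vs 244553.96
KBytes/sec, is about 12% lower, and the drop rate worsens too: 201670 of
256209 datagrams delivered (~21% lost) vs 229371 of 252586 (~9% lost).)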
> However, given that this fixes a real issue which people are seeing I'd
> be inclined to go with it, at least for now.
> Ian.