Message-ID: <4FF5D2B7.6080602@hp.com>
Date: Thu, 05 Jul 2012 10:45:27 -0700
From: Rick Jones <rick.jones2@...com>
To: Jason Wang <jasowang@...hat.com>
CC: mst@...hat.com, mashirle@...ibm.com, krkumar2@...ibm.com,
habanero@...ux.vnet.ibm.com, rusty@...tcorp.com.au,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, edumazet@...gle.com,
tahm@...ux.vnet.ibm.com, jwhan@...ewood.snu.ac.kr,
davem@...emloft.net, akong@...hat.com, kvm@...r.kernel.org,
sri@...ibm.com
Subject: Re: [net-next RFC V5 0/5] Multiqueue virtio-net
On 07/05/2012 03:29 AM, Jason Wang wrote:
>
> Test result:
>
> 1) 1 vm 2 vcpu 1q vs 2q, 1 - 1q, 2 - 2q, no pinning
>
> - Guest to External Host TCP STREAM
> sessions size throughput1 throughput2 % norm1 norm2 %
> 1 64 650.55 655.61 100% 24.88 24.86 99%
> 2 64 1446.81 1309.44 90% 30.49 27.16 89%
> 4 64 1430.52 1305.59 91% 30.78 26.80 87%
> 8 64 1450.89 1270.82 87% 30.83 25.95 84%
Was the -D test-specific option used to set TCP_NODELAY? I'm guessing
from your description of how packet sizes were smaller with multiqueue,
and your need to hack tcp_write_xmit(), that it wasn't, but since we don't
have the specific netperf command lines (hint hint :) I wanted to make certain.
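For reference, a sketch of the kind of command lines that would make this
unambiguous (the destination address, run length, and send size below are
placeholders, not taken from your setup):

  # Guest-to-external-host TCP_STREAM with 64-byte sends; the test-specific
  # -D (after the "--") sets TCP_NODELAY so small sends are not coalesced
  # by Nagle.
  netperf -H 192.0.2.10 -t TCP_STREAM -l 60 -- -m 64 -D

  # Same test without -D: the stack is free to coalesce the 64-byte sends,
  # so on-the-wire packet sizes can differ between the two configurations.
  netperf -H 192.0.2.10 -t TCP_STREAM -l 60 -- -m 64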
Instead of calling them throughput1 and throughput2, it might be clearer
in future to identify them as singlequeue and multiqueue.
Also, how are you combining the concurrent netperf results? Are you
taking sums of what netperf reports, or are you gathering statistics
outside of netperf?
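If the per-session results are simply summed, something along these lines
(only a sketch; the instance count, target address, and runtime are
assumptions) keeps the aggregation explicit and reproducible:

  # Launch 4 concurrent netperf instances, suppress banners with -P 0,
  # capture each instance's result line, then sum the throughput column
  # (field 5 of the classic TCP_STREAM output) with awk.
  for i in $(seq 1 4); do
      netperf -H 192.0.2.10 -t TCP_STREAM -l 60 -P 0 -- -m 64 > /tmp/np.$i &
  done
  wait
  awk '{sum += $5} END {print "aggregate 10^6 bits/sec:", sum}' /tmp/np.*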
> - TCP RR
> sessions size throughput1 throughput2 % norm1 norm2 %
> 50 1 54695.41 84164.98 153% 1957.33 1901.31 97%
A single-instance TCP_RR test would help confirm/refute any non-trivial
change in (effective) path length between the two cases.
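Something like the following (again only a sketch; the host address and run
length are placeholders) gives a single-stream transaction rate whose inverse
tracks the per-transaction path length:

  # One TCP_RR session, 1-byte request and 1-byte response; the reported
  # transactions/sec is dominated by per-packet path length and latency
  # rather than by throughput.
  netperf -H 192.0.2.10 -t TCP_RR -l 60 -- -r 1,1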
happy benchmarking,
rick jones