Message-ID: <F2E9EB7348B8264F86B6AB8151CE2D791BAB307B7E@shsmsx502.ccr.corp.intel.com>
Date: Tue, 3 Aug 2010 16:48:08 +0800
From: "Xin, Xiaohui" <xiaohui.xin@...el.com>
To: Shirley Ma <mashirle@...ibm.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mst@...hat.com" <mst@...hat.com>, "mingo@...e.hu" <mingo@...e.hu>,
"davem@...emloft.net" <davem@...emloft.net>,
"herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
"jdike@...ux.intel.com" <jdike@...ux.intel.com>
Subject: RE: [RFC PATCH v8 00/16] Provide a zero-copy method on KVM
virtio-net.
>-----Original Message-----
>From: Shirley Ma [mailto:mashirle@...ibm.com]
>Sent: Friday, July 30, 2010 6:31 AM
>To: Xin, Xiaohui
>Cc: netdev@...r.kernel.org; kvm@...r.kernel.org; linux-kernel@...r.kernel.org;
>mst@...hat.com; mingo@...e.hu; davem@...emloft.net; herbert@...dor.apana.org.au;
>jdike@...ux.intel.com
>Subject: Re: [RFC PATCH v8 00/16] Provide a zero-copy method on KVM virtio-net.
>
>Hello Xiaohui,
>
>On Thu, 2010-07-29 at 19:14 +0800, xiaohui.xin@...el.com wrote:
>> The idea is simple: pin the guest VM user-space pages and then
>> give the host NIC driver the chance to DMA to them directly.
>> The patches are based on the vhost-net backend driver. We add a
>> device which provides proto_ops such as sendmsg/recvmsg to
>> vhost-net, to send/recv directly to/from the NIC driver. A KVM
>> guest that uses the vhost-net backend may bind any ethX interface
>> on the host side to get copyless data transfer through the guest
>> virtio-net frontend.
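
[As a rough illustration of the pinning step described above -- this is
not taken from the v8 patches; pin_guest_buffer() is a made-up helper
name and the 2.6.3x-era get_user_pages_fast(start, nr_pages, write,
pages) signature is assumed -- the host side would do something along
these lines:

/*
 * Illustrative sketch only: pin a guest user-space buffer so the
 * host NIC can DMA received data straight into it.
 */
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/mm.h>

static int pin_guest_buffer(unsigned long uaddr, size_t len,
			    struct page **pages, int max_pages)
{
	int nr = DIV_ROUND_UP(offset_in_page(uaddr) + len, PAGE_SIZE);
	int got;

	if (nr > max_pages)
		return -EINVAL;

	/* write=1: the device will DMA received data into these pages */
	got = get_user_pages_fast(uaddr, nr, 1, pages);
	if (got < nr) {
		/* partial pin: drop whatever we got and fail */
		while (got > 0)
			put_page(pages[--got]);
		return -EFAULT;
	}
	/* caller maps the pages for DMA and put_page()s them on completion */
	return nr;
}

The real patches use their own mp_* device helpers; the point here is
only that the guest buffer stays pinned for the lifetime of the DMA.]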
>
>Since vhost-net already supports macvtap/tun backends, do you think
>it would be better to implement zero copy in macvtap/tun rather than
>introducing a new media passthrough device here?
>
>> Our goal is to improve the bandwidth and reduce the CPU usage.
>> Exact performance data will be provided later.
>
>I did some vhost performance measurements over 10Gb ixgbe, and found
>that in order to get consistent BW results, the SMP affinities of the
>netperf/netserver, qemu, and vhost threads need to be set.
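
[Purely as an illustration of what the affinity pinning above means --
not part of this thread, and in practice taskset(1) is usually enough --
the equivalent syscall from C is roughly:

/*
 * Pin a task to a single CPU so the netperf/netserver, qemu and
 * vhost threads each stay on a fixed core.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	pid_t pid;
	int cpu;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
		return 1;
	}
	pid = (pid_t)atoi(argv[1]);
	cpu = atoi(argv[2]);

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(pid, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}
]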
>
>Looking forward to these results for the small message size comparison.
>For large message sizes, the 10Gb ixgbe BW is already reached by setting
>vhost SMP affinity with offloading support, so we will see how much the
>CPU utilization can be reduced.
>
>Please provide latency results as well. I did some experiments on
>macvtap zero-copy sendmsg, and what I found is that the get_user_pages
>latency is pretty high.
>
Could you share your performance results (including BW and latency) on
vhost-net, and how you obtained them (your configuration, and especially
the affinity settings)?
Thanks
Xiaohui
>Thanks
>Shirley
>
>
>