Date:	Fri, 30 Jul 2010 16:53:03 +0800
From:	"Xin, Xiaohui" <xiaohui.xin@...el.com>
To:	Shirley Ma <mashirle@...ibm.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"mst@...hat.com" <mst@...hat.com>, "mingo@...e.hu" <mingo@...e.hu>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
	"jdike@...ux.intel.com" <jdike@...ux.intel.com>
Subject: RE: [RFC PATCH v8 00/16] Provide a zero-copy method on KVM
 virtio-net.

>Hello Xiaohui,
>
>On Thu, 2010-07-29 at 19:14 +0800, xiaohui.xin@...el.com wrote:
>> The idea is simple: pin the guest VM's user-space memory and then
>> give the host NIC driver the chance to DMA directly into it.
>> The patches are based on the vhost-net backend driver. We add a device
>> which provides proto_ops such as sendmsg/recvmsg for vhost-net to
>> send/recv directly to/from the NIC driver. A KVM guest that uses the
>> vhost-net backend may bind any ethX interface on the host side to
>> get zero-copy data transfer through the guest virtio-net frontend.
>
>Since vhost-net already supports macvtap/tun backends, do you think
>it would be better to implement zero copy in macvtap/tun rather than
>introducing a new media passthrough device here?
>

I'm not sure whether that would lead to more duplicated code in the kernel.
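
To make the proposal concrete, here is a much-simplified, hypothetical
sketch of the direction (not the actual patches; the mp_* names are made
up, and the signatures follow 2.6.3x-era kernels):

/*
 * Pin guest user-space pages so the NIC can DMA into them, and expose
 * sendmsg()/recvmsg() proto_ops for vhost-net to call.
 */
#include <linux/mm.h>
#include <linux/net.h>
#include <linux/sched.h>
#include <linux/socket.h>

/* Pin 'npages' of guest memory starting at 'addr' for receive DMA. */
static int mp_pin_guest_pages(unsigned long addr, int npages,
			      struct page **pages)
{
	int ret;

	down_read(&current->mm->mmap_sem);
	/* write=1: the NIC will DMA packet data into these pages. */
	ret = get_user_pages(current, current->mm, addr, npages,
			     1 /* write */, 0 /* force */, pages, NULL);
	up_read(&current->mm->mmap_sem);
	return ret;
}

static int mp_sendmsg(struct kiocb *iocb, struct socket *sock,
		      struct msghdr *m, size_t total_len)
{
	/* Pin the iovec pages and hand them to the NIC driver... */
	return total_len;
}

static int mp_recvmsg(struct kiocb *iocb, struct socket *sock,
		      struct msghdr *m, size_t total_len, int flags)
{
	/* Complete packets the NIC DMAed into pinned guest pages... */
	return 0;
}

/* vhost-net would then call these through sock->ops, as it already
 * does for the tun/macvtap sockets. */
static const struct proto_ops mp_socket_ops = {
	.family  = AF_UNSPEC,
	.sendmsg = mp_sendmsg,
	.recvmsg = mp_recvmsg,
};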

>> Our goal is to improve the bandwidth and reduce the CPU usage.
>> Exact performance data will be provided later.
>
>I did some vhost performance measurements over 10Gb ixgbe and found
>that, to get consistent BW results, the SMP affinities of the
>netperf/netserver, qemu, and vhost threads must all be set.
>
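For reference, the pinning itself is just sched_setaffinity() (which is
what "taskset -c <cpu>" does); a minimal, runnable sketch:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	int cpu = (argc > 1) ? atoi(argv[1]) : 0;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* pid 0 means the calling thread/process. */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	printf("pinned to CPU %d\n", cpu);
	return 0;
}

For the already-running qemu/vhost threads one would instead pin by tid,
e.g. "taskset -p -c <cpu> <tid>".
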
>Looking forward to these results for the small-message-size comparison.
>For large message sizes, 10Gb ixgbe BW is already reached by setting
>vhost SMP affinity with offloading support enabled, so we will see how
>much CPU utilization can be reduced.
>
>Please provide latency results as well. I did some experiments with
>macvtap zero-copy sendmsg, and what I found is that get_user_pages()
>latency is pretty high.
>
Ok, I will try that.
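
For a first rough number from userspace, timing mlock() over an
already-faulted buffer is a crude proxy (it is not the same code path as
get_user_pages(), but both end up pinning the pages):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	size_t len = 64 * 1024;		/* one 64K message */
	void *buf;
	struct timespec t0, t1;

	if (posix_memalign(&buf, 4096, len))
		return 1;
	memset(buf, 0, len);		/* fault the pages in first */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mlock(buf, len)) {		/* pin the pages */
		perror("mlock");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("mlock(%zu bytes) took %ld ns\n", len,
	       (t1.tv_sec - t0.tv_sec) * 1000000000L +
	       (t1.tv_nsec - t0.tv_nsec));
	munlock(buf, len);
	free(buf);
	return 0;
}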

>Thanks
>Shirley

