Message-ID: <20100801083113.GB16158@redhat.com>
Date: Sun, 1 Aug 2010 11:31:13 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Shirley Ma <mashirle@...ibm.com>
Cc: xiaohui.xin@...el.com, netdev@...r.kernel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...e.hu, davem@...emloft.net,
herbert@...dor.hengli.com.au, jdike@...ux.intel.com
Subject: Re: [RFC PATCH v8 00/16] Provide a zero-copy method on KVM virtio-net.
On Thu, Jul 29, 2010 at 03:31:22PM -0700, Shirley Ma wrote:
> I did some vhost performance measurements over a 10Gb ixgbe NIC, and found
> that in order to get consistent BW results, the SMP affinities of the
> netperf/netserver, qemu, and vhost threads need to be set.
Could you provide an example of a good setup?
Specifically, is it a good idea for the vhost thread
to inherit CPU affinities from qemu?
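
For reference, here is a minimal userspace sketch of pinning a task to a
single CPU with sched_setaffinity(2). Which task goes on which CPU (qemu,
the vhost thread, netperf/netserver) is an assumption for illustration
only, not a recommended layout:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Pin the given task (pid == 0 means the calling task) to a single CPU. */
static int pin_task(pid_t pid, int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(pid, sizeof(set), &set);
}

int main(int argc, char **argv)
{
	pid_t pid = argc > 1 ? atoi(argv[1]) : 0;
	int cpu = argc > 2 ? atoi(argv[2]) : 0;

	if (pin_task(pid, cpu)) {
		perror("sched_setaffinity");
		return 1;
	}
	printf("pinned pid %d to cpu %d\n", (int)pid, cpu);
	return 0;
}

taskset(1) does the same thing from the shell; the interesting question is
still which tasks should share a core or cache, not the mechanism.
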
> Looking forward to these results for the small message size comparison.
I think we should explore having the driver fall back on data copy
for small message sizes.
The benefit of zero copy would then be reduced CPU utilization on large messages.
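
Just to make the idea concrete, here is a self-contained sketch of the
size-based fallback. The GOOD_COPY_LEN threshold and the two stub transmit
paths are assumptions for illustration, not vhost-net or macvtap code:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define GOOD_COPY_LEN 256	/* assumed crossover point; needs measurement */

static int copy_xmit(const void *buf, size_t len)
{
	/* stand-in for the existing copying transmit path */
	(void)buf;
	printf("copy path: %zu bytes\n", len);
	return 0;
}

static int zerocopy_xmit(const void *buf, size_t len)
{
	/* stand-in for a path that pins the user pages and sends them */
	(void)buf;
	printf("zero-copy path: %zu bytes\n", len);
	return 0;
}

static int xmit(const void *buf, size_t len)
{
	if (len < GOOD_COPY_LEN)
		return copy_xmit(buf, len);	/* cheap for small packets */
	return zerocopy_xmit(buf, len);		/* saves CPU on large packets */
}

int main(void)
{
	char small[64], large[4096];

	memset(small, 0, sizeof(small));
	memset(large, 0, sizeof(large));
	xmit(small, sizeof(small));
	xmit(large, sizeof(large));
	return 0;
}

The real crossover point would have to come from measurements like the
ones discussed above.
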
> For large message sizes, the 10Gb ixgbe BW is already reached by setting
> vhost SMP affinity together with offloading support, so we will see how
> much CPU utilization can be reduced.
>
> Please provide latency results as well. I did some experiments on
> macvtap zero-copy sendmsg, and what I found is that the get_user_pages
> latency is pretty high.
>
> Thanks
> Shirley
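
As a very rough way to get a feel for the cost being discussed, the sketch
below times mlock(2) of an already-faulted buffer from userspace. This is
only an assumed proxy: it does not call get_user_pages() and is not the
macvtap sendmsg path, but it gives an order-of-magnitude number for walking
and pinning a handful of user pages:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define BUF_SIZE (64 * 1024)	/* 16 x 4 KB pages, an assumed buffer size */

static double now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void)
{
	char *buf = malloc(BUF_SIZE);
	double t0, t1;

	if (!buf)
		return 1;
	memset(buf, 0, BUF_SIZE);	/* fault the pages in first */

	t0 = now_us();
	if (mlock(buf, BUF_SIZE)) {
		perror("mlock");
		return 1;
	}
	t1 = now_us();
	munlock(buf, BUF_SIZE);

	printf("mlock of %d bytes took %.1f us\n", BUF_SIZE, t1 - t0);
	free(buf);
	return 0;
}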