Message-Id: <1283545797.14478.5.camel@w-sridhar.beaverton.ibm.com>
Date: Fri, 03 Sep 2010 13:29:57 -0700
From: Sridhar Samudrala <sri@...ibm.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Shirley Ma <mashirle@...ibm.com>, xiaohui.xin@...el.com,
netdev@...r.kernel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...e.hu, davem@...emloft.net,
herbert@...dor.hengli.com.au, jdike@...ux.intel.com
Subject: Re: [RFC PATCH v9 00/16] Provide a zero-copy method on KVM virtio-net.
On Fri, 2010-09-03 at 13:14 +0300, Michael S. Tsirkin wrote:
> On Tue, Aug 10, 2010 at 06:23:24PM -0700, Shirley Ma wrote:
> > Hello Xiaohui,
> >
> > On Fri, 2010-08-06 at 17:23 +0800, xiaohui.xin@...el.com wrote:
> > > Our goal is to improve the bandwidth and reduce the CPU usage.
> > > Exact performance data will be provided later.
> >
> > Do you have any performance data to share here? I tested my
> > experimental macvtap zero-copy patch for TX only. The performance I
> > have seen, without any tuning (default settings), is as follows:
> >
> > Before: netperf with a 16K message size over a 60-second run gets
> > 7.5Gb/s over an ixgbe 10GbE card. perf top shows (columns are
> > samples, percent of samples, and symbol):
> >
> > 2103.00 12.9% copy_user_generic_string
> > 1541.00 9.4% handle_tx
> > 1490.00 9.1% _raw_spin_unlock_irqrestore
> > 1361.00 8.3% _raw_spin_lock_irqsave
> > 1288.00 7.9% _raw_spin_lock
> > 924.00 5.7% vhost_worker
> >
> > After: netperf over a 60-second run gets 8.1Gb/s. perf output (same
> > columns):
> >
> > 1093.00 9.9% _raw_spin_unlock_irqrestore
> > 1048.00 9.5% handle_tx
> > 934.00 8.5% _raw_spin_lock_irqsave
> > 864.00 7.9% _raw_spin_lock
> > 644.00 5.9% vhost_worker
> > 387.00 3.5% use_mm
> >
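> > For reference, runs like the above correspond to a netperf
> > TCP_STREAM invocation along these lines (the guest IP is a
> > placeholder, and the exact option set is an assumption on my part):
> >
> >   netperf -H <guest-ip> -t TCP_STREAM -l 60 -- -m 16384
> >
> > with perf top running on the host for the duration of the test.
> >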
> > I am still working on collecting more data (latency, CPU
> > utilization, ...). I will let you know once I have all the data for
> > macvtap TX zero copy. I have also found a vhost performance
> > regression on the new kernel with tuning: I used to get 9.4Gb/s, but
> > now I cannot reach it.
> >
> > Shirley
>
> Could you please try disabling mergeable buffers and see whether this
> gets you back to where you were?
> -global virtio-net-pci.mrg_rxbuf=off
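>
> For completeness, a full qemu command line using that option would
> look something like the following; the memory size and netdev details
> are placeholders, not your actual configuration:
>
>   qemu-system-x86_64 -enable-kvm -m 2048 \
>       -netdev tap,id=net0,vhost=on \
>       -device virtio-net-pci,netdev=net0 \
>       -global virtio-net-pci.mrg_rxbuf=off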
I don't think Shirley had mergeable buffers enabled when she ran these
tests; the qemu patch to support mergeable buffers with vhost is not yet
upstream. One quick way to confirm from within the guest is sketched
below.
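Assuming the guest exposes virtio devices through sysfs, the negotiated
feature bits show up as a string of 0s and 1s in the device's features
attribute, and VIRTIO_NET_F_MRG_RXBUF is feature bit 15, so something
like this (virtio0 is a placeholder for the actual device) prints '1'
if mergeable rx buffers were negotiated and '0' otherwise:

  cut -c16 /sys/bus/virtio/devices/virtio0/features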
Shirley is on vacation and will be back on Sept 7; she can then provide
more detailed performance data and post her patch.
Thanks
Sridhar