Message-ID: <5872DA217C2FF7488B20897D84F904E7338FD2C5@nkgeml511-mbx.china.huawei.com>
Date: Mon, 20 May 2013 04:22:11 +0000
From: Qinchuanyu <qinchuanyu@...wei.com>
To: Jason Wang <jasowang@...hat.com>
CC: "rusty@...tcorp.com.au" <rusty@...tcorp.com.au>,
"mst@...hat.com" <mst@...hat.com>,
"dhowells@...hat.com" <dhowells@...hat.com>,
"(kvm@...r.kernel.org)" <kvm@...r.kernel.org>,
"(netdev@...r.kernel.org)" <netdev@...r.kernel.org>,
Heguansen <heguansen@...wei.com>
Subject: Re: [PATCH] vhost: get 2% performance improvement by reducing
	spin_lock contention in vhost_work_queue
The patch below is based on
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/tree/drivers/vhost/vhost.c?id=refs/tags/next-20130517
Signed-off-by: Chuanyu Qin <qinchuanyu@...wei.com>
--- a/drivers/vhost/vhost.c 2013-05-20 11:47:05.000000000 +0800
+++ b/drivers/vhost/vhost.c 2013-05-20 11:48:24.000000000 +0800
@@ -154,9 +154,10 @@
if (list_empty(&work->node)) {
list_add_tail(&work->node, &dev->work_list);
work->queue_seq++;
+ spin_unlock_irqrestore(&dev->work_lock, flags);
wake_up_process(dev->worker);
- }
- spin_unlock_irqrestore(&dev->work_lock, flags);
+ } else
+ spin_unlock_irqrestore(&dev->work_lock, flags);
}
void vhost_poll_queue(struct vhost_poll *poll)
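For clarity, this is roughly how vhost_work_queue() reads with the patch
applied (a sketch reconstructed from the hunk above and the next-20130517
tree linked earlier, not the exact file contents):

void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->work_lock, flags);
	if (list_empty(&work->node)) {
		list_add_tail(&work->node, &dev->work_list);
		work->queue_seq++;
		/* Drop the lock before the wakeup so the worker does not
		 * wake up only to block on work_lock still held here. */
		spin_unlock_irqrestore(&dev->work_lock, flags);
		wake_up_process(dev->worker);
	} else
		spin_unlock_irqrestore(&dev->work_lock, flags);
}

This is safe because the list manipulation and the queue_seq update still
happen under work_lock; only the wakeup, which does not touch the protected
state, moves outside the critical section.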
I ran the test using iperf in a 10G environment; the numbers are below:
                original            |       modified
thread_num   tp(Gbps)   vhost(%)    |  tp(Gbps)   vhost(%)
    1          9.59      28.82      |    9.59      27.49
    8          9.61      32.92      |    9.62      26.77
   64          9.58      46.48      |    9.55      38.99
  256          9.60      63.70      |    9.60      52.59
The vhost CPU cost dropped while throughput stayed essentially unchanged.
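The mechanism is generic: if the waker still holds the lock when the worker
wakes, the worker immediately blocks (or spins) on that same lock. A minimal
userspace analogue using pthreads illustrates the idea (an illustrative
sketch only, not vhost code; all names here are made up):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_cond = PTHREAD_COND_INITIALIZER;
static bool work_pending;

/* Producer: publish the work under the lock, but signal only after
 * dropping it, so the worker does not wake into a held lock. */
void queue_work(void)
{
	bool was_idle;

	pthread_mutex_lock(&work_lock);
	was_idle = !work_pending;
	work_pending = true;
	pthread_mutex_unlock(&work_lock);	/* unlock first ... */
	if (was_idle)
		pthread_cond_signal(&work_cond);	/* ... then wake */
}

/* Worker: the predicate is rechecked under the lock, so signalling
 * outside the lock cannot lose a wakeup. */
void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_mutex_lock(&work_lock);
		while (!work_pending)
			pthread_cond_wait(&work_cond, &work_lock);
		work_pending = false;
		pthread_mutex_unlock(&work_lock);
		/* ... process the queued work here ... */
	}
	return NULL;
}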
On 05/20/2013 11:06 AM, Qinchuanyu wrote:
> Right now the wake_up_process() call is made while holding the spin_lock, but it could be done after the lock is released.
> I have tested it with kernel 3.0.27 and a suse11-sp2 guest; it provides a 2%-3% network performance improvement.
>
> Signed-off-by: Chuanyu Qin <qinchuanyu@...wei.com>
Makes sense to me, but you need to generate a patch against net-next.git or
vhost.git on git.kernel.org.
Btw, how did you test this? Care to share the perf numbers?
Thanks
> --- a/drivers/vhost/vhost.c 2013-05-20 10:36:30.000000000 +0800
> +++ b/drivers/vhost/vhost.c 2013-05-20 10:36:54.000000000 +0800
> @@ -144,9 +144,10 @@
> if (list_empty(&work->node)) {
> list_add_tail(&work->node, &dev->work_list);
> work->queue_seq++;
> + spin_unlock_irqrestore(&dev->work_lock, flags);
> wake_up_process(dev->worker);
> - }
> - spin_unlock_irqrestore(&dev->work_lock, flags);
> + } else
> + spin_unlock_irqrestore(&dev->work_lock, flags);
> }
>
> void vhost_poll_queue(struct vhost_poll *poll)