Message-ID: <201408221801508088964@sangfor.com>
Date: Fri, 22 Aug 2014 18:01:52 +0800
From: "Zhang Haoyu" <zhanghy@...gfor.com>
To: "Zhang Haoyu" <zhanghy@...gfor.com>,
"Razya Ladelsky" <RAZYA@...ibm.com>,
"Christian Borntraeger" <borntraeger@...ibm.com>,
"mashirle" <mashirle@...ibm.com>,
"Jason Wang" <jasowang@...hat.com>,
"Michael S.Tsirkin" <mst@...hat.com>
Cc: "abel.gordon" <abel.gordon@...il.com>,
"Alex Glikson" <GLIKSON@...ibm.com>,
"Eran Raichstein" <ERANRA@...ibm.com>,
"Joel Nider" <JOELN@...ibm.com>, "kvm" <kvm@...r.kernel.org>,
"linux-kernel" <linux-kernel@...r.kernel.org>,
"mst" <mst@...hat.com>, "netdev" <netdev@...r.kernel.org>,
"virtualization" <virtualization@...ts.linux-foundation.org>,
"Yossi Kuperman1" <YOSSIKU@...ibm.com>
Subject: Re: [PATCH] vhost: Add polling mode
>>> >
>>> > Results:
>>> >
>>> > Netperf, 1 vm:
>>> > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
>>> > Number of exits/sec decreased 6x.
>>> > The same improvement was shown when I tested with 3 vms running netperf
>>> > (4086 MB/sec -> 5545 MB/sec).
>>> >
>>> > filebench, 1 vm:
>>> > ops/sec improved by 13% with the polling patch. Number of exits was reduced by 31%.
>>> > The same experiment with 3 vms running filebench showed similar numbers.
>>> >
>>> > Signed-off-by: Razya Ladelsky <razya@...ibm.com>
>>>
>>> Gave it a quick try on s390/kvm. As expected, it makes no difference
>>> for a big streaming workload like iperf.
>>> uperf with a 1-1 round robin indeed got faster, by about 30%.
>>> The high CPU consumption is something that bothers me, though, as
>>> virtualized systems tend to be full.
>>>
>>>
>>
>>Thanks for confirming the results!
>>The best way to use this patch would be along with a shared vhost thread
>>serving multiple devices/VMs, as described in:
>>http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument
>>This work assumes a dedicated I/O core on which the vhost thread serves
>>multiple VMs, which makes the high CPU utilization less of a concern.
>>
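For illustration only, a minimal sketch (my assumption of how such a setup could look, not code from the referenced paper or from the patch) of creating one shared vhost worker for several devices/VMs and binding it to a dedicated I/O core; the names create_shared_vhost_worker, shared_vhost_fn and io_cpu are hypothetical:

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

/*
 * Sketch: one worker kthread serves several devices/VMs and is pinned
 * to a dedicated I/O core, so its CPU use stays on that core.
 */
static struct task_struct *create_shared_vhost_worker(int (*shared_vhost_fn)(void *),
						       void *data, unsigned int io_cpu)
{
	struct task_struct *worker;

	worker = kthread_create(shared_vhost_fn, data, "vhost-shared/%u", io_cpu);
	if (IS_ERR(worker))
		return worker;

	/* kthread_bind() must be called before the thread first runs. */
	kthread_bind(worker, io_cpu);
	wake_up_process(worker);
	return worker;
}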
>Hi, Razya, Shirley
>I am going to test the combination of
>"several vhost threads (the count depending on the total number of CPUs on the host, e.g., total_number * 1/3) serving all VMs" and "vhost: add polling mode".
>I have the patch "http://thread.gmane.org/gmane.comp.emulators.kvm.devel/88682/focus=88723" posted by Shirley;
>is there any update to this patch?
>
>And I want to make a small change to this patch: create total_cpu_number * 1/N (N = {3,4}) vhost threads, instead of one vhost thread per CPU, to serve all VMs,
just like Xen netback threads, whose number equals num_online_cpus on Dom0;
but for a KVM host, I think one vhost thread per CPU is too many.
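A minimal sketch of that idea (an assumption on my part, not an existing patch): size the shared vhost worker pool as num_online_cpus()/N instead of one worker per CPU, and place devices onto workers round-robin; the names vhost_pool_size and vhost_pool_pick are hypothetical:

#include <linux/cpumask.h>
#include <linux/kernel.h>

/* Pool of shared vhost workers: roughly one per N online CPUs, at least one. */
static unsigned int vhost_pool_size(unsigned int n)	/* e.g. N = 3 or 4 */
{
	return max_t(unsigned int, 1, num_online_cpus() / n);
}

/* Simple round-robin placement of a device/VM onto a shared worker. */
static unsigned int vhost_pool_pick(unsigned int dev_index, unsigned int pool_size)
{
	return dev_index % pool_size;
}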
>any ideas?
>
>Thanks,
>Zhang Haoyu
>>
>>
>>> > +static int poll_start_rate = 0;
>>> > +module_param(poll_start_rate, int, S_IRUGO|S_IWUSR);
>>> > +MODULE_PARM_DESC(poll_start_rate, "Start continuous polling of virtqueue when rate of events is at least this number per jiffy. If 0, never start polling.");
>>> > +
>>> > +static int poll_stop_idle = 3*HZ; /* 3 seconds */
>>> > +module_param(poll_stop_idle, int, S_IRUGO|S_IWUSR);
>>> > +MODULE_PARM_DESC(poll_stop_idle, "Stop continuous polling of virtqueue after this many jiffies of no work.");
>>>
>>> This seems ridiculously high. Even one jiffy is an eternity, so
>>> setting it to 1 as a default would reduce the CPU overhead for most cases.
>>> If we don't have a packet in one millisecond, we can surely go back
>>> to the kick approach, I think.
>>>
>>> Christian
>>>
>>
>>Good point, will reduce it and recheck.
>>Thank you,
>>Razya
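Following up on the exchange above, a minimal sketch (an assumption on my part, not the patch's actual logic) of how the two module parameters could drive the start/stop decisions; the function names and the last_work_jiffies/events_last_jiffy arguments are hypothetical:

#include <linux/jiffies.h>
#include <linux/types.h>

/* Same defaults as in the quoted patch. */
static int poll_start_rate;			/* 0 = never start polling */
static int poll_stop_idle = 3 * HZ;		/* 3 seconds */

/* Start continuous polling once the event rate reaches poll_start_rate per jiffy. */
static bool vq_should_start_polling(unsigned int events_last_jiffy)
{
	return poll_start_rate && events_last_jiffy >= poll_start_rate;
}

/*
 * Stop polling after poll_stop_idle jiffies with no work.  With Christian's
 * suggestion of a default of 1, polling would fall back to the kick approach
 * after a single idle jiffy instead of after 3 seconds.
 */
static bool vq_should_stop_polling(unsigned long last_work_jiffies)
{
	return poll_stop_idle &&
	       time_after(jiffies, last_work_jiffies + poll_stop_idle);
}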