Message-ID: <OF9D20825B.8FFA2829-ONC2257D3B.004A777F-C2257D3B.004C47BC@il.ibm.com>
Date: Thu, 21 Aug 2014 16:53:47 +0300
From: Razya Ladelsky <RAZYA@...ibm.com>
To: Christian Borntraeger <borntraeger@...ibm.com>
Cc: abel.gordon@...il.com, Alex Glikson <GLIKSON@...ibm.com>,
Eran Raichstein <ERANRA@...ibm.com>,
Joel Nider <JOELN@...ibm.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, mst@...hat.com,
netdev@...r.kernel.org, virtualization@...ts.linux-foundation.org,
Yossi Kuperman1 <YOSSIKU@...ibm.com>
Subject: Re: [PATCH] vhost: Add polling mode
Christian Borntraeger <borntraeger@...ibm.com> wrote on 20/08/2014 11:41:32 AM:
> >
> > Results:
> >
> > Netperf, 1 vm:
> > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > Number of exits/sec decreased 6x.
> > The same improvement was shown when I tested with 3 vms running netperf
> > (4086 MB/sec -> 5545 MB/sec).
> >
> > filebench, 1 vm:
> > ops/sec improved by 13% with the polling patch. Number of exits
> > was reduced by 31%.
> > The same experiment with 3 vms running filebench showed similar numbers.
> >
> > Signed-off-by: Razya Ladelsky <razya@...ibm.com>
>
> Gave it a quick try on s390/kvm. As expected, it makes no difference
> for big streaming workloads like iperf.
> uperf with a 1-1 round robin indeed got faster by about 30%.
> The high CPU consumption is something that bothers me, though, as
> virtualized systems tend to be fully utilized.
>
>
Thanks for confirming the results!
The best way to use this patch would be together with a shared vhost
thread for multiple devices/vms, as described in:
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument
That work assumes a dedicated I/O core where the vhost thread serves
multiple vms, which makes the high CPU utilization less of a concern.
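Roughly, the idea is one kthread sweeping the virtqueues of all the
devices it serves instead of one thread per device. A minimal sketch
(the vq list, poll_link, and vhost_vq_has_work() are illustrative
names, not code from the patch):

	/* Sketch only: a single vhost thread polling the virtqueues
	 * of several devices/vms from one dedicated I/O core. */
	static int vhost_shared_poll_thread(void *data)
	{
		struct list_head *polled_vqs = data;	/* hypothetical vq list */
		struct vhost_virtqueue *vq;

		while (!kthread_should_stop()) {
			list_for_each_entry(vq, polled_vqs, poll_link)
				if (vhost_vq_has_work(vq))	/* hypothetical check */
					vhost_poll_queue(&vq->poll);
			cond_resched();	/* keep the shared core preemptible */
		}
		return 0;
	}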
> > +static int poll_start_rate = 0;
> > +module_param(poll_start_rate, int, S_IRUGO|S_IWUSR);
> > +MODULE_PARM_DESC(poll_start_rate, "Start continuous polling of virtqueue when rate of events is at least this number per jiffy. If 0, never start polling.");
> > +
> > +static int poll_stop_idle = 3*HZ; /* 3 seconds */
> > +module_param(poll_stop_idle, int, S_IRUGO|S_IWUSR);
> > +MODULE_PARM_DESC(poll_stop_idle, "Stop continuous polling of virtqueue after this many jiffies of no work.");
>
> This seems ridiculously high. Even one jiffy is an eternity, so
> setting it to 1 as a default would reduce the CPU overhead for most cases.
> If we don't have a packet in one millisecond, we can surely go back
> to the kick approach, I think.
>
> Christian
>
Good point, will reduce it and recheck.
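For reference, the stop condition is essentially a jiffies comparison,
so along the lines of (field and helper names illustrative, not the
exact patch code):

	if (work_found)
		vq->last_work_time = jiffies;
	else if (time_after(jiffies, vq->last_work_time + poll_stop_idle))
		vhost_vq_stop_polling(vq);	/* hypothetical: fall back to guest kicks */

With poll_stop_idle = 1 that falls back to kicks after a single idle
jiffy, as you suggest.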
Thank you,
Razya