Message-ID: <OFB79DEF3C.1ADC0F14-ONC2257D39.002F0F0A-C2257D39.002F3CBC@il.ibm.com>
Date: Tue, 19 Aug 2014 11:36:31 +0300
From: Razya Ladelsky <RAZYA@...ibm.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: abel.gordon@...il.com, Alex Glikson <GLIKSON@...ibm.com>,
David Miller <davem@...emloft.net>,
Eran Raichstein <ERANRA@...ibm.com>,
Joel Nider <JOELN@...ibm.com>, kvm@...r.kernel.org,
kvm-owner@...r.kernel.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, virtualization@...ts.linux-foundation.org,
Yossi Kuperman1 <YOSSIKU@...ibm.com>
Subject: Re: [PATCH] vhost: Add polling mode
> That was just one example. There are many other possibilities. Either
> actually make the systems load all host CPUs equally, or divide
> throughput by host CPU.
>
The polling patch adds this capability to vhost, reducing the costly exit
overhead when the VM is loaded.
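To make the exit/notification cost concrete, here is a minimal user-space
toy (not the patch itself; all names are invented for the sketch): the
"producer" stands in for the guest placing requests on a shared ring, and
the "poller" stands in for the vhost worker, which spins on the ring
instead of sleeping until the guest kicks an eventfd for each batch of
requests. The CPU it burns while spinning is the utilization increase seen
in the numbers below.

/* Toy user-space illustration of the polling idea (not the vhost patch;
 * all names here are invented).  The consumer spins on the ring instead
 * of blocking until it is notified, trading CPU for fewer exits/kicks. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 256
#define NREQS     100000

struct ring {
        _Atomic unsigned head;              /* written by the producer */
        _Atomic unsigned tail;              /* written by the consumer */
        int slots[RING_SIZE];
};

static struct ring rq;
static _Atomic bool done;

/* "Guest" side: queue requests, never signal an eventfd. */
static void *producer(void *arg)
{
        (void)arg;
        for (int i = 0; i < NREQS; i++) {
                unsigned head = atomic_load(&rq.head);
                while (head - atomic_load(&rq.tail) == RING_SIZE)
                        ;                   /* ring full, wait for the poller */
                rq.slots[head % RING_SIZE] = i;
                atomic_store(&rq.head, head + 1);
        }
        atomic_store(&done, true);
        return NULL;
}

/* "vhost worker" side: busy-poll instead of waiting for a kick. */
static void *poller(void *arg)
{
        long handled = 0;

        (void)arg;
        while (!atomic_load(&done) ||
               atomic_load(&rq.tail) != atomic_load(&rq.head)) {
                unsigned tail = atomic_load(&rq.tail);
                if (tail == atomic_load(&rq.head))
                        continue;           /* nothing yet, keep spinning */
                (void)rq.slots[tail % RING_SIZE];   /* "handle" the request */
                atomic_store(&rq.tail, tail + 1);
                handled++;
        }
        printf("handled %ld requests without a single notification\n", handled);
        return NULL;
}

int main(void)
{
        pthread_t p, c;

        pthread_create(&c, NULL, poller, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
}

In the patch itself the ring is of course the virtqueue, and guest-to-host
notifications are disabled while vhost polls, which is where the exit
savings come from.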
In order to load the VM, I ran netperf with a message size of 256 bytes:
Without polling: 2480 Mbits/sec, utilization: VM - 100%, vhost - 64%
With polling:    4160 Mbits/sec, utilization: VM - 100%, vhost - 100%
Dividing throughput by the combined VM + vhost CPU utilization gives
throughput/cpu of 2480/164 = 15.1 without polling and 4160/200 = 20.8
with polling.
My intention was to load vhost as close as possible to 100% utilization
in the non-polling run, so that it could be compared fairly against the
polling run (where vhost is always at 100%).
The best use case, of course, will be when the shared vhost thread work
(TBD) is integrated; then vhost will actually use its polling cycles to
handle requests from multiple devices (even from multiple VMs).
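Just to illustrate the direction (this code does not exist yet; it only
extends the toy above with hypothetical names), a shared worker would
cycle over the rings of several devices in one polling loop, so the
cycles burnt while polling get amortized across devices and VMs:

/* Hypothetical extension of the toy above: one polling thread, many rings.
 * Reuses struct ring, RING_SIZE and the done flag from the sketch above. */
static void *shared_poller(void *arg)
{
        struct ring **rings = arg;          /* NULL-terminated, one per device */

        while (!atomic_load(&done)) {
                for (struct ring **r = rings; *r; r++) {
                        unsigned tail = atomic_load(&(*r)->tail);
                        if (tail == atomic_load(&(*r)->head))
                                continue;   /* this device is idle, try the next */
                        (void)(*r)->slots[tail % RING_SIZE];  /* handle one request */
                        atomic_store(&(*r)->tail, tail + 1);
                }
        }
        return NULL;
}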
Thanks,
Razya