Message-ID: <CAMou0=xqzcSTxmw_NXjF96Na2RwLB6LDqjGaVFD0gtA=AFsJOg@mail.gmail.com>
Date: Fri, 5 Dec 2014 12:02:18 -0800
From: Nick H <nickkvm@...il.com>
To: netdev@...r.kernel.org
Subject: KVM vs Xen-PV netperf numbers
Hello
Not sure I have the right audience. I have two VMs on similar hosts:
one is KVM + virtio + vhost based, while the other is Xen PV with the
xen-netfront driver. Running a simple netperf test with 1400-byte
packets on both VMs, I see a wide difference in throughput, as follows.
Xen PV throughput comes to:
./netperf -H 10.xx.xx.49 -l 20 -t UDP_STREAM -- -m 1400
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
10.xx.xx.49 (10.xx.xx.49) port 0 AF_INET : demo
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1400   20.00     1910549      0    1069.89
262144           20.00     1704789           954.66
whereas the KVM virtio number comes to:
./netperf -t UDP_STREAM -l 10 -H 10.xx.xx.49 -- -m 1400
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
10.xx.xx.49 (10.xx.xx.49) port 0 AF_INET : demo
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1400   10.00      155060      0     173.65
262144           10.00      155060            173.65
I built a custom kernel where I simply free the (UDP-only) skbs in the
virtio-net xmit_skb() path and count how many skbs the driver has been
handed (a rough sketch of the change follows the output below).
Surprisingly, the count was not much higher either:
./netperf -t UDP_STREAM -l 10 -H 10.xx.xx.49 -- -m 1400
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
10.xx.xx.49 (10.xx.xx.49) port 0 AF_INET : demo
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1400   10.00      224792      0     251.74
262144           10.00           0              0.00
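The change is essentially the following. This is a reconstructed sketch
rather than my literal patch: the freed_udp_skbs counter name is made
up, and in the real patch the check sits in/around xmit_skb(); the
point is only that the skb is freed and counted before any descriptor
is posted or virtqueue_kick() runs.

/* Sketch against drivers/net/virtio_net.c (3.17-ish), not the literal
 * patch.  When the check fires, neither xmit_skb() nor virtqueue_kick()
 * runs for that packet. */
#include <linux/atomic.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static atomic_long_t freed_udp_skbs = ATOMIC_LONG_INIT(0); /* made-up name */

static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* Free and count IPv4/UDP skbs before they reach the virtqueue. */
	if (skb->protocol == htons(ETH_P_IP) &&
	    ip_hdr(skb)->protocol == IPPROTO_UDP) {
		atomic_long_inc(&freed_udp_skbs);
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	/* ... unchanged virtio-net path: xmit_skb(sq, skb) followed by
	 * virtqueue_kick(sq->vq) ... */
	return NETDEV_TX_OK;
}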
So 1910549 packets were pumped through the Xen PV driver versus 224792
through the virtio driver (the 212992 figure is the socket buffer size,
not a message count). Assuming the data path inside the kernel is the
same for both drivers, and given that my experiment eliminates virtio's
virtqueue_kick() call by freeing the packet ahead of it, can all this
overhead be attributed to system call overhead in the KVM+virtio
combination? Is there anything I am missing?
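For reference, the guest-side workload boils down to a tight sendto()
loop like the one below (my own simplification, not netperf source; the
address, port and message count are placeholders), so each 1400-byte
message costs at least one system call in the guest, and as far as I
can tell one virtqueue_kick() in the unmodified driver.

/* Simplified stand-in for netperf's UDP_STREAM sender: one sendto()
 * system call per 1400-byte message.  Address/port/count are
 * placeholders, not from my actual runs. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	char buf[1400] = { 0 };
	struct sockaddr_in dst;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	long i;

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(9);                        /* placeholder port */
	inet_pton(AF_INET, "10.0.0.49", &dst.sin_addr); /* placeholder addr */

	for (i = 0; i < 1000000; i++)
		sendto(fd, buf, sizeof(buf), 0,
		       (struct sockaddr *)&dst, sizeof(dst));

	close(fd);
	return 0;
}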
The KVM setup is based on:
Linux ubn-nested 3.17.0+ #16 SMP Thu Dec 4 12:00:09 PST 2014 x86_64
x86_64 x86_64 GNU/Linux
Regards
N