Message-ID: <52E088B5.8070401@redhat.com>
Date: Thu, 23 Jan 2014 11:12:53 +0800
From: Jason Wang <jasowang@...hat.com>
To: Stefan Hajnoczi <stefanha@...il.com>,
Alejandro Comisario <alejandro.comisario@...cadolibre.com>
CC: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: kvm virtio ethernet ring on guest side over high throughput (packet
per second)
On 01/22/2014 11:22 PM, Stefan Hajnoczi wrote:
> On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote:
>
> CCed Michael Tsirkin and Jason Wang who work on KVM networking.
>
>> Hi guys. In the past, when we were using physical servers, we had
>> several issues with the throughput of our APIs. In our case we
>> measure this in packets per second, since we don't push much
>> bandwidth (Mb/s): our APIs answer with lots of very small packets
>> (maximum response of 3.5k and average response of 1.5k). When we hit
>> the throughput ceiling on those physical servers (seen as client
>> timeouts), we tuned the ethernet ring configuration and the problem
>> disappeared.
>>
>> Today, with KVM and over 10k virtual instances, when we want to
>> increase the throughput of the KVM instances we run into the fact
>> that, when using virtio on the guests, the ring is limited to 256
>> TX/RX entries, and on the host side the attached vnet device has a
>> txqueuelen of 500.
>>
>> What I want to know is: how can I tune the guest to support more
>> packets per second, if I know that is my bottleneck?
> I suggest investigating performance in a systematic way. Set up a
> benchmark that saturates the network. Post the details of the benchmark
> and the results that you are seeing.
>
> Then, we can discuss how to investigate the root cause of the bottleneck.
>
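For a pps-oriented workload, a request/response benchmark is usually
more telling than a bulk transfer. A minimal sketch, assuming netperf
is installed in the guest and a netserver is reachable at 192.0.2.1
(placeholder address), with sizes similar to your API traffic:

    netperf -H 192.0.2.1 -t TCP_RR -- -r 1500,1500

While it runs, watching the drop counters on the guest NIC and on the
host tap device (e.g. "ip -s link show dev vnet0", where vnet0 is
whatever tap the guest is attached to) should show where packets are
being lost.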
>> * Does virtio expose a way to configure more entries in the virtual ethernet's ring?
> No, ring size is hardcoded in QEMU (on the host).
Does it make sense to let the user configure it, at least through
something like the qemu command line?
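For reference, the current limits are easy to inspect, and the host
side queue length can already be raised today; a quick sketch
(interface names are just examples, and "ethtool -g" only works if the
guest driver reports ring parameters):

    # guest: show the preset ring maximums of the virtio NIC
    ethtool -g eth0

    # host: raise the tap device's txqueuelen from the default 500
    ip link set dev vnet0 txqueuelen 1000

Neither of these changes the virtio ring itself; that would still need
a knob on the QEMU side.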
>
>> * Does the use of vhost_net help me increase packets per second,
>> and not only bandwidth?
> vhost_net is generally the most performant network option.
>
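In case it is not already enabled, a minimal sketch with a tap backend
looks roughly like this (plain QEMU options; libvirt/OpenStack wire
this up differently):

    # host: make sure the vhost_net module is loaded
    modprobe vhost_net

    # qemu: vhost=on on the tap netdev
    qemu-system-x86_64 ... \
        -netdev tap,id=net0,vhost=on \
        -device virtio-net-pci,netdev=net0

With vhost=on the datapath is handled by a kernel thread instead of
QEMU userspace, which tends to help pps more than raw bandwidth.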
>> Has anyone had to struggle with this before, and do you know where I
>> can look? There is a LOT of information about KVM networking
>> performance tuning, but nothing related to increasing throughput in
>> terms of pps capacity.
>>
>> These are a couple of the configurations we currently have on the
>> compute nodes:
>>
>> * 2x1Gb bonded interfaces (if you want to know the more than 20
>> models we are using, just ask)
>> * Multi-queue interfaces, pinned via IRQ affinity to different cores
Maybe you can try multiqueue virtio-net with vhost. It lets the guest
use more than one tx/rx virtqueue pair to do the network processing.
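A rough sketch of what that looks like on a plain QEMU command line
(queue count and device names here are only examples, and it needs a
QEMU and guest driver recent enough for multiqueue support, so the 3.2
guest kernel mentioned below may be too old):

    # host/qemu: 4 queue pairs on a vhost tap backend
    qemu-system-x86_64 ... \
        -netdev tap,id=net0,vhost=on,queues=4 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=10

    # guest: enable the extra queue pairs
    ethtool -L eth0 combined 4

"vectors" is normally sized as 2*queues+2 so that each queue pair gets
its own MSI-X vector.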
>> * Linux bridges, no VLAN, no open-vswitch
>> * ubuntu 12.04 kernel 3.2.0-[40-48]