Message-ID: <503E2F27.5060904@gmail.com>
Date: Wed, 29 Aug 2012 17:03:03 +0200
From: Sasha Levin <levinsasha928@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: rusty@...tcorp.com.au, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, avi@...hat.com, kvm@...r.kernel.org
Subject: Re: [PATCH v2 2/2] virtio-ring: Allocate indirect buffers from cache when possible

On 08/29/2012 01:07 PM, Michael S. Tsirkin wrote:
> On Tue, Aug 28, 2012 at 03:35:00PM +0200, Sasha Levin wrote:
>> On 08/28/2012 03:20 PM, Michael S. Tsirkin wrote:
>>> On Tue, Aug 28, 2012 at 03:04:03PM +0200, Sasha Levin wrote:
>>>> Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will
>>>> use indirect descriptors and allocate them using a simple
>>>> kmalloc().
>>>>
>>>> This patch adds a cache which will allow indirect buffers under
>>>> a configurable size to be allocated from that cache instead.
>>>>
>>>> Signed-off-by: Sasha Levin <levinsasha928@...il.com>
>>>
>>> I imagine this helps performance? Any numbers?
>>
>> I ran benchmarks on the original RFC; I've re-tested it now and got numbers
>> similar to the original ones (virtio-net using vhost-net, thresh=16):
>>
>> Before:
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384  16384    10.00    4512.12
>>
>> After:
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384  16384    10.00    5399.18
>>
>>
>> Thanks,
>> Sasha
>
> This is with both patches 1 + 2?
> Sorry, could you please also test what happens if you apply
> - just patch 1
> - just patch 2
>
> Thanks!
Sure thing!

I've also re-run it on an IBM server-type host instead of my laptop. Here are
the results:

Vanilla kernel:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
() port 0 AF_INET
enable_enobufs failed: getprotobyname
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    7922.72

Patch 1, with threshold=16:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
() port 0 AF_INET
enable_enobufs failed: getprotobyname
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    8415.07

Patch 2:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
() port 0 AF_INET
enable_enobufs failed: getprotobyname
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    8931.05

Note that these are simple tests, with netperf listening on one end and a plain
'netperf -H [host]' run from within the guest. If there are other tests that
might be interesting, please let me know.
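
In case it helps, the allocation change in patch 2 boils down to roughly the
following (a simplified sketch, not the exact code; the cache and threshold
names here are illustrative):

static struct kmem_cache *indirect_cache;	/* objects sized for
						 * 'thresh' descriptors */
static unsigned int indirect_thresh = 16;	/* configurable threshold */

static struct vring_desc *alloc_indirect(unsigned int total_sg, gfp_t gfp)
{
	/*
	 * Small chains come from the dedicated slab cache; larger
	 * chains fall back to the old kmalloc() path.
	 */
	if (total_sg <= indirect_thresh)
		return kmem_cache_alloc(indirect_cache, gfp);

	return kmalloc(total_sg * sizeof(struct vring_desc), gfp);
}

static void free_indirect(struct vring_desc *desc, unsigned int total_sg)
{
	/* The free path must mirror the allocation decision. */
	if (total_sg <= indirect_thresh)
		kmem_cache_free(indirect_cache, desc);
	else
		kfree(desc);
}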
Thanks,
Sasha