Message-ID: <BANLkTimVG4GVoar6RHYRpdiDngCC+9rHTg@mail.gmail.com>
Date: Fri, 17 Jun 2011 06:00:54 +0100
From: Stefan Hajnoczi <stefanha@...il.com>
To: Sasha Levin <levinsasha928@...il.com>
Cc: Anthony Liguori <anthony@...emonkey.ws>,
Pekka Enberg <penberg@...nel.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Avi Kivity <avi@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Prasad Joshi <prasadjoshi124@...il.com>,
Cyrill Gorcunov <gorcunov@...il.com>,
Asias He <asias.hejun@...il.com>
Subject: Re: [ANNOUNCE] Native Linux KVM tool v2
On Fri, Jun 17, 2011 at 2:03 AM, Sasha Levin <levinsasha928@...il.com> wrote:
> On Thu, 2011-06-16 at 17:50 -0500, Anthony Liguori wrote:
>> On 06/16/2011 09:48 AM, Pekka Enberg wrote:
>> > On Wed, Jun 15, 2011 at 6:53 PM, Pekka Enberg<penberg@...nel.org> wrote:
>> >> - Fast QCOW2 image read-write support beating Qemu in fio benchmarks. See the
>> >> following URL for test result details: https://gist.github.com/1026888
>> >
>> > It turns out we were benchmarking the wrong guest kernel version for
>> > qemu-kvm which is why it performed so much worse. Here's a summary of
>> > qemu-kvm beating tools/kvm:
>> >
>> > https://raw.github.com/gist/1029359/9f9a714ecee64802c08a3455971e410d5029370b/gistfile1.txt
>> >
>> > I'd ask for a brown paper bag if I wasn't so busy eating my hat at the moment.
>>
>> np, it happens.
>>
>> Is that still with QEMU with IDE emulation, cache=writethrough, and
>> 128MB of guest memory?
>>
>> Does your raw driver support multiple parallel requests? It doesn't
>> look like it does from how I read the code. At some point, I'd be happy
>> to help y'all do some benchmarking against QEMU.
>>
>
> Each virtio-blk device can process requests independently of the other
> virtio-blk devices, which means we can handle requests for different
> devices in parallel.
>
> Within each device, we support parallel requests in the sense that we do
> vectored IO for each head (which may contain multiple blocks) in the
> vring. We don't submit multiple heads in parallel because when I tried
> adding AIO I noticed that at most 2-3 heads are available at a time - and
> since they all point to the same device, running them in parallel doesn't
> really help.
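For concreteness, the "vectored IO per head" approach described above amounts to turning one descriptor chain into a single scatter-gather syscall. A minimal sketch (submit_head and the iovec layout are illustrative, not the actual tools/kvm code; the buffers stand in for the guest-physical segments of a virtio-blk descriptor chain):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Submit one vring head's data buffers as a single vectored write.
 * pwritev() issues the whole scatter-gather list in one syscall, so a
 * multi-segment request needs no per-segment loop. */
static ssize_t submit_head(int fd, off_t offset,
                           struct iovec *iov, int iovcnt)
{
    return pwritev(fd, iov, iovcnt, offset);
}
```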
One thing that QEMU does but I'm a little suspicious of is request
merging. virtio-blk will submit those 2-3 heads using
bdrv_aio_multiwrite() if they become available in the same virtqueue
notify. The requests will be merged if possible.
My feeling is that requests coming through virtio-blk should already be
merged, so there should be no need to merge them again in the host -
host-side merging could be a workaround for a poor virtio-blk vring
configuration that prevented the guest from sending large requests.
However, this feature did yield performance improvements with qcow2
image files when it was introduced, so it would be interesting to look
at.
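For illustration, the coalescing step boils down to something like the following (a hypothetical merge_requests over sorted requests - not QEMU's actual bdrv_aio_multiwrite() code, which also has to merge the iovecs):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified request descriptor for the sketch. */
struct blk_req {
    long long sector;   /* start sector */
    int nb_sectors;     /* length in sectors */
};

/* Coalesce sorted, contiguous requests in place; returns the new
 * count.  Adjacent requests are merged when one ends exactly where
 * the next begins. */
static size_t merge_requests(struct blk_req *reqs, size_t n)
{
    size_t out = 0;
    for (size_t i = 1; i < n; i++) {
        if (reqs[out].sector + reqs[out].nb_sectors == reqs[i].sector) {
            reqs[out].nb_sectors += reqs[i].nb_sectors; /* extend */
        } else {
            reqs[++out] = reqs[i]; /* gap: keep as separate request */
        }
    }
    return n ? out + 1 : 0;
}
```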
Are you enabling indirect descriptors on the virtio-blk vring? That
should allow more requests to be made available because you don't run
out of vring descriptors so easily.
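For reference, an indirect descriptor packs a whole chain behind a single ring slot, so an n-segment request costs one ring descriptor instead of n. A sketch using the vring_desc layout and flags from the virtio ring spec (make_indirect is an illustrative helper, and the address stored would really be guest-physical):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Descriptor flags from the virtio ring spec. */
#define VRING_DESC_F_NEXT      1
#define VRING_DESC_F_WRITE     2
#define VRING_DESC_F_INDIRECT  4

struct vring_desc {
    uint64_t addr;   /* guest-physical buffer address */
    uint32_t len;    /* buffer length */
    uint16_t flags;
    uint16_t next;   /* index of next descriptor in the chain */
};

/* Build an out-of-ring indirect table for an n-segment request and
 * point a single ring slot at it.  Returns the table so the caller
 * can free it when the request completes. */
static struct vring_desc *make_indirect(struct vring_desc *slot,
                                        const struct vring_desc *segs,
                                        uint16_t n)
{
    struct vring_desc *table = calloc(n, sizeof(*table));
    for (uint16_t i = 0; i < n; i++) {
        table[i] = segs[i];
        table[i].flags = segs[i].flags & ~(uint16_t)VRING_DESC_F_NEXT;
        if (i + 1 < n) {
            table[i].flags |= VRING_DESC_F_NEXT;
            table[i].next = i + 1;
        }
    }
    slot->addr = (uint64_t)(uintptr_t)table; /* would be guest-physical */
    slot->len = n * sizeof(*table);
    slot->flags = VRING_DESC_F_INDIRECT;     /* one slot, whole chain */
    return table;
}
```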
Stefan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/