Message-ID: <4E32C79A.80606@gmail.com>
Date: Fri, 29 Jul 2011 22:45:46 +0800
From: Liu Yuan <namei.unix@...il.com>
To: Stefan Hajnoczi <stefanha@...il.com>
CC: "Michael S. Tsirkin" <mst@...hat.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Avi Kivity <avi@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Khoa Huynh <khoa@...ibm.com>,
Badari Pulavarty <pbadari@...ibm.com>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [RFC PATCH]vhost-blk: In-kernel accelerator for virtio block
device
On 07/29/2011 08:50 PM, Stefan Hajnoczi wrote:
> I hit a weirdness yesterday, just want to mention it in case you notice it too.
>
> When running vanilla qemu-kvm I forgot to use aio=native. When I
> compared the results against virtio-blk-data-plane (which *always*
> uses Linux AIO) I was surprised to find average 4k read latency was
> lower and the standard deviation was also lower.
>
> So from now on I will run tests both with and without aio=native.
> aio=native should be faster and if I can reproduce the reverse I'll
> try to figure out why.
>
> Stefan
On my laptop I don't see this weirdness: the emulated POSIX AIO is much
worse than Linux AIO, as expected, and the gap gets wider as the iodepth
gets deeper.
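
For reference, a rough sketch of how such a comparison might be set up
(the image path, device name and sizes below are placeholders, not my
actual setup): boot the guest once with aio=threads and once with
aio=native on a cache=none virtio drive, then run a 4k random-read fio
job inside the guest at increasing iodepth:

# host side, hypothetical image name
qemu-kvm -m 1024 -drive file=disk.img,if=virtio,cache=none,aio=threads ...
qemu-kvm -m 1024 -drive file=disk.img,if=virtio,cache=none,aio=native ...

# guest side fio job, run with iodepth=1, 8, 32, ...
[randread-4k]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
filename=/dev/vdb
runtime=60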
If aio=native is not set, qemu uses its emulated POSIX AIO interface to do
the IO. I peeked at posix-aio-compat.c: it uses a thread pool plus
synchronous preadv/pwritev to emulate AIO behaviour. That synchronous IO
path can hurt random read/write performance even more, since the
io-scheduler may never get a chance to merge the request stream
(blk_finish_plug->queue_unplugged->__blk_run_queue).
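
To make the mechanism concrete, here is a minimal sketch (not the actual
qemu code; aio_req, submit_io and worker are made-up names) of AIO
emulated by a one-worker thread pool doing blocking preadv/pwritev:

#define _GNU_SOURCE
#include <pthread.h>
#include <sys/uio.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

struct aio_req {
    int fd;
    struct iovec iov;
    off_t offset;
    int is_write;
    ssize_t result;             /* filled in by the worker */
    struct aio_req *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static struct aio_req *queue;   /* pending requests (LIFO, for simplicity) */

/* Worker: pop one request at a time and serve it with a *blocking* syscall. */
static void *worker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!queue)
            pthread_cond_wait(&cond, &lock);
        struct aio_req *req = queue;
        queue = req->next;
        pthread_mutex_unlock(&lock);

        /* The synchronous preadv/pwritev is what makes this "emulated" AIO. */
        if (req->is_write)
            req->result = pwritev(req->fd, &req->iov, 1, req->offset);
        else
            req->result = preadv(req->fd, &req->iov, 1, req->offset);

        /* A real implementation would signal completion to the caller here. */
        printf("req done, result=%zd\n", req->result);
        free(req);
    }
    return NULL;
}

/* "Submit" just queues the request; the caller does not block. */
static void submit_io(int fd, void *buf, size_t len, off_t off, int is_write)
{
    struct aio_req *req = malloc(sizeof(*req));
    req->fd = fd;
    req->iov.iov_base = buf;
    req->iov.iov_len = len;
    req->offset = off;
    req->is_write = is_write;
    pthread_mutex_lock(&lock);
    req->next = queue;
    queue = req;
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&cond);
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    int fd = open("/tmp/testfile", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    static char buf[4096] = "hello";
    submit_io(fd, buf, sizeof(buf), 0, 1);   /* one 4k write at offset 0 */
    sleep(1);                                /* crude wait, demo only */
    return 0;
}

The point is that each request ends up as a blocking syscall in a worker
thread, so with random IO the block layer tends to see requests trickle
in one by one instead of as a batch it could merge.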
Yuan