Message-ID: <4E32F7F2.4080607@us.ibm.com>
Date: Fri, 29 Jul 2011 11:12:02 -0700
From: Badari Pulavarty <pbadari@...ibm.com>
To: Liu Yuan <namei.unix@...il.com>
CC: Stefan Hajnoczi <stefanha@...il.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Avi Kivity <avi@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Khoa Huynh <khoa@...ibm.com>
Subject: Re: [RFC PATCH]vhost-blk: In-kernel accelerator for virtio block
device
Hi Liu Yuan,
I am glad to see that you have started looking at vhost-blk. I made an
attempt a year ago to improve block performance using the vhost-blk
approach:
http://lwn.net/Articles/379864/
http://lwn.net/Articles/382543/
I will take a closer look at your patchset to find differences and
similarities.
- I focused on using vfs interfaces in the kernel, so that I could use it
for file-backed devices. Our use-case scenario is mostly file-backed
images.
- In a few cases, virtio-blk did outperform vhost-blk -- which was
counter-intuitive -- but I couldn't nail down exactly why.
- I had to implement my own threads for parallelism. I see that you are
using the aio infrastructure to get around that.
- In our high-scale performance testing, we found that block-backed
device performance is pretty close to bare metal (91% of bare metal).
vhost-blk didn't add any major benefit on top of that.
I am curious about your performance analysis and the data on where you
see the gains and why. Hence I gave this work low priority :(
Now that you are interested in driving this, I am happy to work with you
and see what vhost-blk brings to the table (even if it only helps us
improve virtio-blk).
Thanks,
Badari