Message-ID: <500527BA.9000001@redhat.com>
Date: Tue, 17 Jul 2012 10:52:10 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Asias He <asias@...hat.com>
CC: Stefan Hajnoczi <stefanha@...il.com>, linux-kernel@...r.kernel.org,
linux-aio@...ck.org, kvm@...r.kernel.org,
"Michael S. Tsirkin" <mst@...hat.com>,
virtualization@...ts.linux-foundation.org,
Benjamin LaHaise <bcrl@...ck.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 0/5] Add vhost-blk support
On 17/07/2012 10:29, Asias He wrote:
> So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per request.
Also, is there anything we can improve? Perhaps we could modify epoll and
ask it to clear the eventfd for us (that would save 2 reads), or
io_getevents (that would save 1)?
> I guess you mean qemu here. Yes, in theory, qemu's block layer can be
> improved to achieve performance similar to vhost-blk or kvm tool's
> userspace virtio-blk. But I think it makes no sense to reject one
> solution because there is another, so-far-theoretical solution called:
> we can do something similar in qemu.
It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
feature set: no support for block device formats, non-raw protocols,
etc. This makes it different from vhost-net.
So the question is: is it going to be used in production, or is it just
a useful reference tool?
Paolo