Message-ID: <50052276.2080906@redhat.com>
Date:	Tue, 17 Jul 2012 16:29:42 +0800
From:	Asias He <asias@...hat.com>
To:	Stefan Hajnoczi <stefanha@...il.com>
CC:	linux-kernel@...r.kernel.org, linux-aio@...ck.org,
	kvm@...r.kernel.org, "Michael S. Tsirkin" <mst@...hat.com>,
	virtualization@...ts.linux-foundation.org,
	Benjamin LaHaise <bcrl@...ck.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 0/5] Add vhost-blk support

On 07/16/2012 07:58 PM, Stefan Hajnoczi wrote:
> On Thu, Jul 12, 2012 at 4:35 PM, Asias He <asias@...hat.com> wrote:
>> This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
>> device accelerator. Compared to the userspace virtio-blk implementation,
>> vhost-blk gives about a 5% to 15% performance improvement.
>
> Why is it 5-15% faster?  vhost-blk and the userspace virtio-blk you
> benchmarked should be doing basically the same thing:
>
> 1. An eventfd file descriptor is signalled when the vring has new
> requests available from the guest.
> 2. A thread wakes up and processes the virtqueue.
> 3. Linux AIO is used to issue host I/O.
> 4. An interrupt is injected into the guest.

Yes, this is how both of them work, though there are some differences in 
the details. For example:

In vhost-blk, we use vhost's work infrastructure to handle the requests; 
in kvm tool, we use a dedicated thread.
In vhost-blk, we use irqfd to inject interrupts; in kvm tool, we use an 
ioctl to inject interrupts.
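For reference, a rough sketch of the two injection paths (error handling 
omitted; vm_fd, call_evt_fd and irq_gsi are placeholder names):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* vhost style: register an irqfd once; from then on the kernel
     * injects the guest interrupt whenever call_evt_fd is signalled,
     * with no per-request syscall from userspace. */
    struct kvm_irqfd irqfd = {
        .fd  = call_evt_fd,     /* eventfd signalled on completion */
        .gsi = irq_gsi,         /* guest interrupt line */
    };
    ioctl(vm_fd, KVM_IRQFD, &irqfd);

    /* kvm tool style: one KVM_IRQ_LINE ioctl per injected interrupt. */
    struct kvm_irq_level level = {
        .irq   = irq_gsi,
        .level = 1,             /* assert; edge-triggered needs a matching 0 */
    };
    ioctl(vm_fd, KVM_IRQ_LINE, &level);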


> Does the vhost-blk implementation do anything fundamentally different
> from userspace?  Where is the overhead that userspace virtio-blk has?


Currently, no. But we could work with bios directly in vhost-blk, as 
Christoph suggested, which would make the IO path from the guest to the 
host's real storage even shorter.
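Just to illustrate the idea (this is not code from the patchset; bdev, 
req, pages and vhost_blk_bio_done are hypothetical names, and the bio API 
shown is the circa-3.x kernel one):

    /* Build a bio directly against the backing block device and
     * submit it, skipping the vfs and aio layers entirely. */
    struct bio *bio = bio_alloc(GFP_KERNEL, nr_pages);
    int i;

    bio->bi_bdev    = bdev;                /* backing block device */
    bio->bi_sector  = sector;              /* from the virtio-blk request */
    bio->bi_end_io  = vhost_blk_bio_done;  /* completion callback */
    bio->bi_private = req;

    for (i = 0; i < nr_pages; i++)
        bio_add_page(bio, pages[i], PAGE_SIZE, 0);

    submit_bio(is_write ? WRITE : READ, bio);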

I've been trying my best to reduce the overhead of virtio-blk on the kvm 
tool side, and I do not see any significant overhead there. Compared to 
vhost-blk, the overhead we have in userspace virtio-blk is syscalls. For 
each IO request, we have:

    epoll_wait() & read(): wait for the eventfd that the guest kicks
    io_submit(): submit the aio
    read(): read the aio completion eventfd
    io_getevents(): reap the aio completion results
    ioctl(): trigger the interrupt

So, vhost-blk saves us at least ~6 syscalls per request.
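To make that concrete, the per-request loop on the kvm tool side looks 
roughly like this (a minimal sketch using libaio; the fds, aio_ctx and 
buffers are assumed to be set up already, and error handling is omitted):

    #include <stdint.h>
    #include <unistd.h>
    #include <libaio.h>
    #include <sys/epoll.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    struct epoll_event ev;
    struct iocb iocb, *iocbp = &iocb;
    struct io_event done;
    struct kvm_irq_level level = { .irq = irq_gsi, .level = 1 };
    uint64_t cnt;

    epoll_wait(epoll_fd, &ev, 1, -1);         /* 1. wait for guest kick   */
    read(guest_notify_fd, &cnt, sizeof cnt);  /* 2. drain the eventfd     */

    io_prep_pread(&iocb, disk_fd, buf, len, offset);
    io_set_eventfd(&iocb, aio_done_fd);       /* completion -> eventfd    */
    io_submit(aio_ctx, 1, &iocbp);            /* 3. submit the aio        */

    read(aio_done_fd, &cnt, sizeof cnt);      /* 4. aio completion signal */
    io_getevents(aio_ctx, 1, 1, &done, NULL); /* 5. reap the result       */

    ioctl(vm_fd, KVM_IRQ_LINE, &level);       /* 6. inject the interrupt  */

vhost-blk folds all of these into the kernel: the guest kick wakes the 
vhost work thread directly, and the completion raises the irqfd, so none 
of these transitions cross the syscall boundary.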

> I'm asking because it would be beneficial to fix the overhead
> (especially if it could speed up all userspace applications) instead
> of adding a special-purpose kernel module to work around the overhead.

I guess you mean qemu here. Yes, in theory, qemu's block layer can be 
improved to achieve performance similar to what vhost-blk or kvm tool's 
userspace virtio-blk has. But I think it makes no sense to block one 
solution just because another solution exists in theory, namely: we could 
do something similar in qemu.

What do you mean by special-purpose here? Do we need a general-purpose 
kernel module instead? Is vhost-net a special-purpose kernel module? Is 
xen-blkback a special-purpose kernel module? And I think vhost-blk is 
beneficial to qemu too, as well as to any other kvm host-side 
implementation.

-- 
Asias

