Message-ID: <20120717094526.GC7949@redhat.com>
Date:	Tue, 17 Jul 2012 12:45:26 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	Asias He <asias@...hat.com>, Stefan Hajnoczi <stefanha@...il.com>,
	linux-kernel@...r.kernel.org, linux-aio@...ck.org,
	kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
	Benjamin LaHaise <bcrl@...ck.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 0/5] Add vhost-blk support

On Tue, Jul 17, 2012 at 10:52:10AM +0200, Paolo Bonzini wrote:
> On 17/07/2012 10:29, Asias He wrote:
> > So, vhost-blk at least saves ~6 syscalls for us in each request. 
> 
> Are they really 6?  If I/O is coalesced by a factor of 3, for example
> (i.e. each exit processes 3 requests), it's really 2 syscalls per request.
> 
> Also, is there anything we can improve?  Perhaps we can modify epoll and
> ask it to clear the eventfd for us (would save 2 reads)?  Or
> io_getevents (would save 1)?
> 
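For reference, the path being counted above looks more or less like
this (a rough sketch, not actual qemu code, fd names made up): the
guest kick arrives via an ioeventfd and Linux AIO completions via
another eventfd, both registered with epoll:

/* sketch: per-iteration syscalls in a userspace virtio-blk loop */
#include <sys/epoll.h>
#include <unistd.h>
#include <stdint.h>
#include <libaio.h>

static void handle_iteration(int epfd, int kick_fd, int aio_fd,
			     io_context_t ctx)
{
	struct epoll_event ev;
	struct io_event done[64];
	uint64_t cnt;

	epoll_wait(epfd, &ev, 1, -1);            /* wait for kick or completion */
	if (ev.data.fd == kick_fd)
		read(kick_fd, &cnt, sizeof(cnt));        /* clear guest-kick eventfd */
	if (ev.data.fd == aio_fd) {
		read(aio_fd, &cnt, sizeof(cnt));         /* clear the aio eventfd */
		io_getevents(ctx, 1, 64, done, NULL);    /* reap completions */
	}
	/* io_submit() for new requests and the notification back to the
	 * guest add further syscalls per batch */
}

Having epoll clear the eventfds itself, or reap the aio events too, is
exactly what would remove the reads counted above.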
> > I guess you mean qemu here. Yes, in theory, qemu's block layer can be
> > improved to achieve performance similar to what vhost-blk or kvm tool's
> > userspace virtio-blk has. But I think it makes no sense to rule out one
> > solution because there is another, purely theoretical, solution called:
> > we can do something similar in qemu.
> 
> It depends.  Like vhost-scsi, vhost-blk has the problem of a crippled
> feature set: no support for block device formats, non-raw protocols,
> etc.  This makes it different from vhost-net.

Well, vhost-net is also more limited than virtio-net: no support for
userspace networking, no support for level-triggered interrupts,
no support for legacy qemu vlans, no way to trace the datapath in
userspace, and only virtio is supported. None of these is fundamental,
but this is how our implementation currently behaves, so from the
user's point of view that's how it is.  There are also fundamental
limitations - e.g. it's Linux-only, a special module needs to be
loaded, and the user needs to get an fd to the char device ...
The way we addressed this is by making it seamless for the user:
basically, if your setup matches what vhost-net can accelerate, it
gets enabled; if not, userspace is used. Most of the
logic is in libvirt.

> So it raises the question: is it going to be used in production, or just a
> useful reference tool?
> 
> Paolo

Sticking to raw already makes virtio-blk faster, doesn't it?
In that light, vhost-blk looks to me like just another optimization option.
Ideally, I think the user just should not care where we handle virtio:
in-kernel or in userspace.  One can imagine it being enabled/disabled
automatically when none of the features it doesn't support are in use.

For example, currently you specify vhost=on for the tap backend, and
then if you try to set up something it doesn't support, like
level-triggered interrupts, it gets disabled and userspace virtio is used.
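I.e. something like (illustrative command line, exact options vary):

    qemu-system-x86_64 ... \
        -netdev tap,id=net0,ifname=tap0,script=no,vhost=on \
        -device virtio-net-pci,netdev=net0

vhost=on makes the tap backend open /dev/vhost-net; if the guest
configuration then needs something vhost-net can't handle, the ring is
simply processed in userspace instead, without the user doing anything.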

-- 
MST
