Message-ID: <20110428192955.GI32595@reaktio.net>
Date:	Thu, 28 Apr 2011 22:29:56 +0300
From:	Pasi Kärkkäinen <pasik@....fi>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc:	Christoph Hellwig <hch@...radead.org>,
	"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
	Ian Campbell <Ian.Campbell@...rix.com>,
	Stefano Stabellini <Stefano.Stabellini@...citrix.com>,
	"jaxboe@...ionio.com" <jaxboe@...ionio.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	alyssar@...gle.com, "konrad@...nel.org" <konrad@...nel.org>,
	vgoyal@...hat.com
Subject: Re: [Xen-devel] Re: [PATCH v3] xen block backend.

On Wed, Apr 27, 2011 at 06:06:34PM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Apr 21, 2011 at 04:04:12AM -0400, Christoph Hellwig wrote:
> > On Thu, Apr 21, 2011 at 08:28:45AM +0100, Ian Campbell wrote:
> > > On Thu, 2011-04-21 at 04:37 +0100, Christoph Hellwig wrote:
> > > > This should sit in userspace.  And the last time this issue was discussed,
> > > > Stefano said the qemu Xen disk backend is just as fast as this kernel
> > > > code.  And that's with a not even very optimized codebase yet.
> > > 
> > > Stefano was comparing qdisk to blktap. This patch is blkback, a
> > > completely in-kernel driver which exports raw block devices to guests;
> > > it's very useful in conjunction with LVM, iSCSI, etc. The last
> > > measurement I heard was that qdisk was around 15% down compared to
> > > blkback.
> > 
> > Please show real numbers on why adding this to kernel space is required.
> 
> First off, many thanks go out to Alyssa Wilk and Vivek Goyal.
> 
> Alyssa for cluing me in on the CPU-bound problem (on the first machine I was
> doing the testing on, I hit the CPU ceiling and got quite skewed results).
> Vivek for helping me figure out why the kernel blkback was performing so badly
> when a READ request got added to the stream of WRITEs with the CFQ scheduler
> (I had not set REQ_SYNC on the WRITE requests).
>   
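A side note for readers not following the blkback code: as I understand it, the
fix Konrad describes amounts to tagging the backend's WRITE bios as synchronous,
so CFQ treats them as sync I/O rather than starvable async writes once a READ
shows up in the stream. A minimal sketch, assuming the 2.6.39-era
submit_bio(rw, bio) interface; the helper name below is made up, not the actual
xen-blkback change:

#include <linux/bio.h>     /* struct bio, REQ_SYNC (via blk_types.h) */
#include <linux/fs.h>      /* WRITE, submit_bio() */

/*
 * Illustrative only: mark a backend WRITE as synchronous before
 * submitting it, so the CFQ scheduler keeps servicing it alongside
 * competing READs instead of deprioritizing it as async I/O.
 */
static void backend_submit_write(struct bio *bio)
{
	submit_bio(WRITE | REQ_SYNC, bio);
}
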
> The setup is as follows:
> 
> iSCSI target - running Linux v2.6.39-rc4 with TCM LIO-4.1 patches (which
> provide iSCSI and Fibre Channel target support) [1]. I export a 10GB RAMdisk
> over a 1GB network connection.
> 
> iSCSI initiator - Sandy Bridge i3-2100 3.1GHz w/8GB, runs v2.6.39-rc4
>  with pv-ops patches [2]. Either 32-bit or 64-bit, and with Xen-unstable
>  (c/s 23246), Xen QEMU (e073e69457b4d99b6da0b6536296e3498f7f6599) with
>  one patch to enable aio [3]. The upstream QEMU version is quite close to
>  this one (it has a bug-fix in it). Memory for Dom0/DomU is limited to 2GB.
>  I boot off PXE and run everything from the ramdisk.
> 
> The kernel/initramfs that I am using for this testing is the same
> throughout and is based off VirtualIron's build system [4].
> 
> There are two tests; each test is run three times.
> 
> The first is 64K random writes across the disk, with four threads
> doing the pounding. The results are in the 'randw-bw.png' file.
> 
> The second is based off IOMeter - it does random reads (20%) and writes
> (80%) with various block sizes, from 512 bytes up to 64K, with two threads
> doing it. The results are in the 'iometer-bw.png' file.
> 
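For concreteness, here is roughly what such a mixed workload looks like. The
actual benchmark tool isn't named above, so the device path, the use of
O_DIRECT and the iteration count below are my own assumptions; setting the
read percentage to 0, fixing the block size at 64K and using four threads
would approximate the first test:

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DEV        "/dev/xvdb"        /* hypothetical test device      */
#define DEV_SIZE   (10ULL << 30)      /* 10GB, as in the setup above   */
#define MAX_IO     (64 * 1024)        /* request sizes up to 64K       */
#define NTHREADS   2                  /* two threads, as described     */
#define READ_PCT   20                 /* 20% reads, 80% writes         */
#define IOS        100000             /* requests per thread (assumed) */

static void *worker(void *arg)
{
	unsigned int seed = (unsigned long)arg;
	int fd = open(DEV, O_RDWR | O_DIRECT);
	void *buf;
	long i;

	if (fd < 0 || posix_memalign(&buf, 4096, MAX_IO))
		return NULL;
	memset(buf, 0, MAX_IO);

	for (i = 0; i < IOS; i++) {
		/* 512-byte-aligned size between 512 bytes and 64K   */
		size_t len = 512 * (1 + rand_r(&seed) % (MAX_IO / 512));
		/* random 64K-aligned offset within the 10GB disk    */
		off_t off = (off_t)(rand_r(&seed) % (DEV_SIZE / MAX_IO)) * MAX_IO;

		/* errors ignored for brevity; this only generates load */
		if (rand_r(&seed) % 100 < READ_PCT)
			pread(fd, buf, len, off);
		else
			pwrite(fd, buf, len, off);
	}
	free(buf);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	long t;

	for (t = 0; t < NTHREADS; t++)
		pthread_create(&tid[t], NULL, worker, (void *)(t + 1));
	for (t = 0; t < NTHREADS; t++)
		pthread_join(tid[t], NULL);
	return 0;
}

Build with something like "gcc -O2 -pthread" and point DEV at the block
device exported to the guest.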

A summary for those who don't bother checking the attachments :)

The xen-blkback (kernel) backend seems to perform a lot better
than the qemu qdisk (userspace) backend.

CPU usage is also lower with the kernel backend driver.
Detailed numbers are in the attachments to Konrad's previous email.

-- Pasi

