Message-ID: <20110427220634.GA26316@dumpdata.com>
Date:	Wed, 27 Apr 2011 18:06:34 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Ian Campbell <Ian.Campbell@...rix.com>,
	"jaxboe@...ionio.com" <jaxboe@...ionio.com>,
	"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Stefano Stabellini <Stefano.Stabellini@...citrix.com>,
	"konrad@...nel.org" <konrad@...nel.org>, vgoyal@...hat.com,
	alyssar@...gle.com
Subject: Re: [Xen-devel] Re: [PATCH v3] xen block backend.

On Thu, Apr 21, 2011 at 04:04:12AM -0400, Christoph Hellwig wrote:
> On Thu, Apr 21, 2011 at 08:28:45AM +0100, Ian Campbell wrote:
> > On Thu, 2011-04-21 at 04:37 +0100, Christoph Hellwig wrote:
> > > This should sit in userspace.  And the last time this issue was
> > > discussed, Stefano said the qemu Xen disk backend is just as fast as
> > > this kernel code.  And that's with a not even very optimized codebase
> > > yet.
> > 
> > Stefano was comparing qdisk to blktap. This patch is blkback, which is a
> > completely in-kernel driver that exports raw block devices to guests;
> > e.g. it's very useful in conjunction with LVM, iSCSI, etc. The last
> > measurement I heard was that qdisk was around 15% down compared to
> > blkback.
> 
> Please show real numbers on why adding this to kernel space is required.

First off, many thanks go out to Alyssa Wilk and Vivek Goyal.

Alyssa for cluing me in on the CPU-bound problem (on the first machine I was
doing the testing on, I hit the CPU ceiling and got quite skewed results).
Vivek for helping me figure out why kernel blkback performance suffered when a
READ request got added to a stream of WRITEs under the CFQ scheduler (I had
not set REQ_SYNC on the WRITE requests).
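
For the curious, that fix boils down to tagging the guest's WRITE bios as
synchronous before submission, so CFQ does not idle the queue on them when
a READ shows up. A minimal sketch against the v2.6.39-era submit_bio()
API (illustrative only, not the exact blkback hunk):

	/* Tag guest writes REQ_SYNC so CFQ treats them as synchronous
	 * instead of starving them behind an occasional READ. */
	int operation = WRITE | REQ_SYNC;
	submit_bio(operation, bio);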
  
The setup is as follows:

iSCSI target - running Linux v2.6.39-rc4 with the TCM LIO-4.1 patches (which
provide iSCSI and Fibre Channel target support) [1]. I export a 10GB RAM disk
over a 1Gb network connection.
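
For reference, the export can be configured roughly like this (a hedged
sketch in targetcli syntax with a made-up IQN; the LIO-4.1 patchset shipped
its own admin utilities, so the exact commands at the time differed):

targetcli /backstores/ramdisk create name=rd0 size=10G
targetcli /iscsi create iqn.2011-04.org.example:ramdisk
targetcli /iscsi/iqn.2011-04.org.example:ramdisk/tpg1/luns create /backstores/ramdisk/rd0
targetcli /iscsi/iqn.2011-04.org.example:ramdisk/tpg1/portals create 0.0.0.0 3260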

iSCSI initiator - Sandy Bridge i3-2100 3.1GHz w/8GB, runs v2.6.39-rc4
 with pv-ops patches [2]. Either 32-bit or 64-bit, and with Xen-unstable
 (c/s 23246), Xen QEMU (e073e69457b4d99b6da0b6536296e3498f7f6599) with
 one patch to enable aio [3]. The upstream QEMU version is quite close to
 this one (it has a bug-fix in it). Memory for Dom0/DomU is limited to 2GB.
 I boot off PXE and run everything from the ramdisk.
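
Attaching the exported disk on the initiator is the usual open-iscsi
dance (the portal address here is made up):

iscsiadm -m discovery -t sendtargets -p 192.168.0.1
iscsiadm -m node --login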

The kernel/initramfs that I am using for this testing is the same
throughout and is based on VirtualIron's build system [4].

There are two tests; each test is run three times.

The first does random 64K writes across the disk, with four threads
doing the pounding. The results are in the 'randw-bw.png' file.

The second is based on IOMeter - it does random reads (20%) and writes
(80%) with various block sizes, from 512 bytes up to 64K, and two
threads doing it. The results are in the 'iometer-bw.png' file.

Also attached are the 'write' and 'iometer' fio files I used.
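
In case the attachments do not survive, the jobs looked roughly like the
following (reconstructed from the description above, not the exact
attached files; the ioengine/iodepth/runtime values and the bssplit
percentages are guesses):

; 'write' - four threads of random 64K writes
[global]
direct=1
ioengine=libaio
iodepth=32
filename=/dev/xvda
runtime=60
time_based

[randw]
rw=randwrite
bs=64k
numjobs=4

; 'iometer' - two threads, 20% reads / 80% writes, 512 bytes to 64K
[iometer]
rw=randrw
rwmixread=20
bssplit=512/10:1k/10:2k/10:4k/20:8k/20:16k/10:32k/10:64k/10
numjobs=2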

The guest config files are quite simple. They look like this:

kernel="/mnt/lab/latest/vmlinuz"
ramdisk="/mnt/lab/latest/initramfs.cpio.gz"
extra="console=hvc0 debug earlyprintk=xenboot"
memory=2048
maxmem=2048
vcpus=2
name="phy-xvda"
on_crash="preserve"
vif = [ 'bridge=switch' ]
vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
disk = [ 'phy:/dev/sdb,xvda,w']

or to use QEMU qdisk:

kernel="/mnt/lab/latest/vmlinuz"
ramdisk="/mnt/lab/latest/initramfs.cpio.gz"
extra="console=hvc0 debug earlyprintk=xenboot"
memory=2048
maxmem=2048
vcpus=2
name="qdisk-xvda"
on_crash="preserve"
vif = [ 'bridge=switch' ]
vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
disk = [ 'file:/dev/sdb,xvda,w']

/dev/sdb is naturally the LIO TCM RAM disk. The meaningful difference
between the two configs is the disk line: 'phy:' uses the in-kernel
blkback, while 'file:' goes through the QEMU qdisk backend.
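
Starting and tearing down a guest between runs is just the standard
toolstack invocation (config file names assumed):

xm create phy-xvda.cfg
xm destroy phy-xvda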

[1]: git://git.kernel.org/pub/scm/linux/kernel/git/nab/lio-core-2.6.git #lio-4.1
[2]: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git #devel/next-2.6.39
[3]: http://darnok.org/xen/qdisk_vs_blkback_v3.1/qemu-enable-aio.patch
[4]: git://xenbits.xensource.com/xentesttools/bootstrap.git


Attachments:
  iometer-bw.png (image/png, 10080 bytes)
  randw-bw.png (image/png, 9739 bytes)
  iometer (text/plain, 495 bytes)
  write (text/plain, 430 bytes)
