Message-ID: <AANLkTik2T53BeDl7KWZEixVqXbZ1IajkQlTHBO41Qj2V@mail.gmail.com>
Date:	Sat, 17 Jul 2010 07:29:00 +0200
From:	Giangiacomo Mariotti <gg.mariotti@...il.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: BTRFS: Unbelievably slow with kvm/qemu

On Wed, Jul 14, 2010 at 9:49 PM, Christoph Hellwig <hch@...radead.org> wrote:
> There are a lot of variables when using qemu.
>
> The most important ones are:
>
>  - the cache mode on the device.  The default is cache=writethrough,
>   which is not quite optimal.  You generally do want to use cache=none,
>   which uses O_DIRECT in qemu.
>  - whether the backing image is sparse or not.
>  - whether you use barriers - both in the host and the guest.
>
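For anyone who wants to reproduce this, I believe those options map onto
a qemu command line roughly like the one below. The binary name, image
path, sizes, and memory are placeholders; adjust them for your setup:

    # sparse raw backing image (blocks allocated on demand)
    qemu-img create -f raw guest.img 10G

    # fully preallocated alternative to the above
    dd if=/dev/zero of=guest.img bs=1M count=10240

    # start the guest with cache=none, i.e. O_DIRECT on the host
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=guest.img,if=virtio,cache=none

Leaving out the cache= suboption keeps the cache=writethrough default.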
> Below I have a table comparing raw blockdevices, xfs, btrfs, ext4 and
> ext3.  For ext3 we also compare the default, unsafe barrier=0 version
> and the barrier=1 version you should use if you actually care about
> your data.
>
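In the guest that is just an ext3 mount option; assuming the filesystem
under test is the root, something like:

    # enable write barriers on an already-mounted ext3 root
    mount -o remount,barrier=1 /

    # or persistently, in the guest's /etc/fstab (virtio disk assumed):
    # /dev/vda1  /  ext3  defaults,barrier=1  0  1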
> The comparison is a simple untar of a Linux 2.6.34 tarball, including a
> sync after it.  We run this with ext3 in the guest, either using the
> default barrier=0, or for the later tests also using barrier=1.  It
> is done on an OCZ Vertex SSD, which gets reformatted and fully TRIMed
> before each test.
>
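So if I read that right, the measured workload inside the guest is
essentially the following (tarball name assumed):

    # time a kernel untar followed by a sync
    time sh -c 'tar xjf linux-2.6.34.tar.bz2 && sync'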
> As you can see you generally do want to use cache=none and every
> filesystem is about the same speed for that - except that on XFS you
> also really need preallocation.  What's interesting is how bad btrfs
> is with the default cache mode compared to the others, and that for many
> filesystems things actually get minimally faster when enabling barriers
> in the guest.  Things will look very different for barrier-heavy guests;
> I'll do another benchmark for those.
>
> cache mode              barrier         image           bdev            xfs             btrfs           ext4            ext3            ext3 (barrier)
>
> cache=writethrough      nobarrier       sparse          0m27.183s       0m42.552s       2m28.929s       0m33.749s       0m24.975s       0m37.105s
> cache=writethrough      nobarrier       prealloc        -               0m32.840s       2m28.378s       0m34.233s       -               -
>
> cache=none              nobarrier       sparse          0m21.988s       0m49.758s       0m24.819s       0m23.977s       0m22.569s       0m24.938s
> cache=none              nobarrier       prealloc        -               0m24.464s       0m24.646s       0m24.346s       -               -
>
> cache=none              barrier         sparse          0m21.526s       0m41.158s       0m24.403s       0m23.924s       0m23.040s       0m23.272s
> cache=none              barrier         prealloc        -               0m23.944s       0m24.284s       0m23.981s       -               -
>
Very interesting. I haven't had the time to try it again, but now I'm
going to try some of the cache options and see which gives me the best
results.

-- 
Giangiacomo
