Date:	Sun, 29 Aug 2010 20:14:41 -0400
From:	Josef Bacik <josef@...hat.com>
To:	Tomasz Chmielewski <mangoo@...g.org>
Cc:	linux-kernel@...r.kernel.org, linux-btrfs@...r.kernel.org,
	hch@...radead.org, gg.mariotti@...il.com,
	"Justin P. Mattock" <justinmattock@...il.com>, mjt@....msk.ru,
	josef@...hat.com, tytso@....edu
Subject: Re: BTRFS: Unbelievably slow with kvm/qemu

On Sun, Aug 29, 2010 at 09:34:29PM +0200, Tomasz Chmielewski wrote:
> Christoph Hellwig wrote:
>
>> There are a lot of variables when using qemu.
>>
>> The most important ones are:
>>
>>  - the cache mode on the device.  The default is cache=writethrough,
>>    which is not quite optimal.  You generally do want to use cache=none
>>    which uses O_DIRECT in qemu.
>>  - whether the backing image is sparse or not.
>>  - whether you use barriers - both in the host and in the guest.
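(For illustration, a qemu invocation with the cache mode set explicitly might
look roughly like this - the binary name, image path and guest sizing are
placeholders, not taken from this thread:)

    # cache=none opens the backing image with O_DIRECT,
    # bypassing the host page cache
    qemu-kvm -m 1024 -smp 2 \
        -drive file=/var/lib/libvirt/images/guest.qcow2,if=virtio,cache=none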
>
> I noticed that when btrfs is mounted with default options, writing
> e.g. 10 GB in the KVM guest using a qcow2 image results in 20 GB written
> on the host (as measured with "iostat -m -p").
>
>
> With ext4 (or btrfs mounted with nodatacow), a 10 GB write in the guest
> produces a 10 GB write on the host.
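(For reference, a rough sketch of how such a comparison can be reproduced -
device name, mount point and sampling interval are hypothetical:)

    # mount the btrfs volume holding the guest image with data COW disabled
    mount -o nodatacow /dev/sdb1 /mnt/images

    # per-partition write totals in MB, sampled every 5 seconds,
    # while the guest writes ~10 GB
    iostat -m -p sdb 5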
>

Whoa, 20 GB?  That doesn't sound right; COW should just mean we get quite a bit
of fragmentation, not write everything twice.  What exactly is qemu doing?
Thanks,

Josef
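(As an aside, one rough way to see what qemu is doing at the syscall level -
the PID is a placeholder, and a qemu built to use linux-aio would show
io_submit instead of the pwrite/pwritev calls:)

    # log write and flush calls issued by the qemu process
    strace -f -tt -e trace=pwrite64,pwritev,write,fsync,fdatasync -p <qemu-pid>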
