Message-ID: <4C3C299C.20306@redhat.com>
Date: Tue, 13 Jul 2010 10:53:48 +0200
From: Kevin Wolf <kwolf@...hat.com>
To: Josef Bacik <josef@...hat.com>
CC: Giangiacomo Mariotti <gg.mariotti@...il.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Michael Tokarev <mjt@....msk.ru>, linux-kernel@...r.kernel.org,
qemu-devel <qemu-devel@...gnu.org>
Subject: Re: [Qemu-devel] Re: BTRFS: Unbelievably slow with kvm/qemu
On 12.07.2010 15:43, Josef Bacik wrote:
> On Mon, Jul 12, 2010 at 03:34:44PM +0200, Giangiacomo Mariotti wrote:
>> On Mon, Jul 12, 2010 at 9:09 AM, Michael Tokarev <mjt@....msk.ru> wrote:
>>>
>>> This looks quite similar to a problem with ext4 and O_SYNC which I
>>> reported earlier but no one cared to answer (or read?) - there:
>>> http://permalink.gmane.org/gmane.linux.file-systems/42758
>>> (sent to qemu-devel and linux-fsdevel lists - Cc'd too). You can
>>> try a few other options, esp. cache=none and re-writing some guest
>>> files to verify.
>>>
>>> /mjt
>>>
>> Either way, I suspect changing to cache=none wouldn't tell me much:
>> if it's as slow as before, it's still unusable, and if it's even
>> slower, it'd be even more unusable, so I wouldn't be able to tell the
>> difference. What I can say for certain is that with the exact same
>> virtual hd file, same options and same system, there's no problem at
>> all on an ext3 fs, while on Btrfs it's not just slower, it takes
>> ages.
>>
>
> O_DIRECT support was just introduced recently; please try on the latest
> kernel with the normal settings (which IIRC use O_DIRECT), that should
> make things suck a lot less.
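
For illustration, a cache=none run as suggested would look roughly like
the following (the image path and the remaining options are only
placeholders):

  qemu-kvm -drive file=/path/to/guest.img,if=virtio,cache=none [...]
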
IIUC, he uses the default cache option of qemu, which is
cache=writethrough and maps to O_DSYNC without O_DIRECT. O_DIRECT would
only be used for cache=none.
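
To make that mapping concrete, here is a minimal sketch (an illustration
only, not actual qemu source) of how those cache modes would translate to
open(2) flags on the host image file:

  #define _GNU_SOURCE         /* for O_DIRECT on Linux */
  #include <fcntl.h>
  #include <string.h>

  /* Illustration of the cache= modes discussed above. */
  static int open_image(const char *path, const char *cache_mode)
  {
      int flags = O_RDWR;

      if (!strcmp(cache_mode, "writethrough"))
          flags |= O_DSYNC;   /* default: writes reach stable storage */
      else if (!strcmp(cache_mode, "none"))
          flags |= O_DIRECT;  /* bypass the host page cache */
      /* cache=writeback: neither flag, the host page cache absorbs writes */

      return open(path, flags);
  }
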
Kevin