Message-ID: <alpine.LFD.2.00.1004112332380.18009@localhost.localdomain>
Date: Sun, 11 Apr 2010 23:54:34 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Andi Kleen <andi@...stfloor.org>
cc: Avi Kivity <avi@...hat.com>, Ben Gamari <bgamari.foss@...il.com>,
Arjan van de Ven <arjan@...radead.org>,
LKML <linux-kernel@...r.kernel.org>, tytso@....edu,
npiggin@...e.de, Ingo Molnar <mingo@...e.hu>,
Ruald Andreae <ruald.a@...il.com>,
Jens Axboe <jens.axboe@...cle.com>,
Olly Betts <olly@...vex.com>,
martin f krafft <madduck@...duck.net>
Subject: Re: Poor interactive performance with I/O loads with fsync()ing
On Sun, 11 Apr 2010, Andi Kleen wrote:
> > XFS does not do much better. Just moved my VM images back to ext for
> > that reason.
>
> Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
> which can make a big difference depending on the disk. You can
> disable them on XFS too of course, with the known drawbacks.
>
> XFS also typically needs some tuning to get reasonable log sizes.
>
> My point was merely (before people chime in with counter examples)
> that XFS/btrfs/jfs don't suffer from the "need to sync all transactions for
> every fsync" issue. There can (and will be) still other issues.
Yes, I moved them back from XFS to ext3 simply because moving them
from ext3 to XFS turned out to be a completely unusable disaster.
I know that I can tweak knobs on XFS (or any other file system), but I
would not have expected it to suck that much for KVM with the
default settings, which are perfectly fine for the other use cases
that made us move to XFS.
Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/