Message-Id: <200903271227.07424.Martin@lichtvoll.de>
Date: Fri, 27 Mar 2009 12:27:01 +0100
From: Martin Steigerwald <Martin@...htvoll.de>
To: Jesse Barnes <jbarnes@...tuousgeek.org>
Cc: Theodore Tso <tytso@....edu>, Ingo Molnar <mingo@...e.hu>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Arjan van de Ven <arjan@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Nick Piggin <npiggin@...e.de>,
Jens Axboe <jens.axboe@...cle.com>,
David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29

On Wednesday 25 March 2009 Jesse Barnes wrote:
> On Tue, 24 Mar 2009 09:20:32 -0400
>
> Theodore Tso <tytso@....edu> wrote:
> > They don't solve the problem where there is a *huge* amount of writes
> > going on, though --- if something is dirtying pages at a rate far
> > greater than the local disk can write it out, say, either "dd
> > if=/dev/zero of=/mnt/make-lots-of-writes" or a massive distcc cluster
> > driving a huge amount of data towards a single system or a wget over
> > a local 100 megabit ethernet from a massive NFS server where
> > everything is in cache, then you can have a major delay with the
> > fsync().
>
> You make it sound like this is hard to do... I was running into this
> problem *every day* until I moved to XFS recently. I'm running a
> fairly beefy desktop (VMware running a crappy Windows install w/AV junk
> on it, builds, icecream and large mailboxes) and have a lot of RAM, but
> it became unusable for minutes at a time, which was just totally
> unacceptable, thus the switch. Things have been better since, but are
> still a little choppy.
>
> I remember early in the 2.6.x days there was a lot of focus on making
> interactive performance good, and for a long time it was. But this I/O
> problem has been around for a *long* time now... What happened? Do not
> many people run into this daily? Do all the filesystem hackers run
> with special mount options to mitigate the problem?

Well, I always had the feeling that somewhere between one 2.6.x release and another I/O latencies increased a lot. At first I thought I was just imagining it, and by the time I was more and more convinced that it is real, I had forgotten since which kernel version I had been seeing these increased latencies.
This is on IBM ThinkPad T42 and T23 with XFS.

I/O latencies are pathetic when dpkg reads in its database or when I do a tar -xf linux-x.y.z.tar.bz2.

I never got down to what is causing these higher latencies, even though I tried different I/O schedulers, tuned XFS options and used relatime.
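
For illustration, the kind of knobs I mean - device and mount point here are only examples:

    # switch the I/O scheduler for one disk
    # (choices in 2.6.29: noop, anticipatory, deadline, cfq)
    echo deadline > /sys/block/sda/queue/scheduler

    # remount a filesystem with relatime to cut down atime updates
    mount -o remount,relatime /home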

What I found though is that on XFS at least a tar -xf linux-kernel / rm -rf linux-kernel operation is way slower with barriers and write cache enabled than with no barriers and no write cache. And frankly I never understood that.
XFS crawls to a stop on metadata operations when barriers are enabled.
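
Roughly the comparison I mean - disk, partition, paths and kernel version are only examples; hdparm -W toggles the drive write cache, barrier/nobarrier the XFS barriers:

    # case 1: barriers on, drive write cache on (slow here)
    hdparm -W1 /dev/sda
    mount -o barrier /dev/sda2 /scratch
    time tar -xf linux-2.6.29.tar.bz2 -C /scratch
    time rm -rf /scratch/linux-2.6.29
    umount /scratch

    # case 2: barriers off, drive write cache off (much faster here)
    hdparm -W0 /dev/sda
    mount -o nobarrier /dev/sda2 /scratch
    time tar -xf linux-2.6.29.tar.bz2 -C /scratch
    time rm -rf /scratch/linux-2.6.29
    umount /scratch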

According to the XFS FAQ disabling the drive write cache should be as safe as enabling barriers. And I always understood barriers as a feature to get *some* ordering constraints, i.e. writes before the barrier go before the barrier and writes after it go after it - even when a drive's hardware write cache is involved. But without write cache, ordering will always be exactly as issued from the Linux block layer, because all I/Os sent to the drive are then write-through and synchronous, whereas with barriers and write cache only the barrier requests are synchronous.
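
One way to actually watch this, assuming blktrace is installed and the device again is only an example: trace the request stream during one of the tar runs and look for the barrier/flush requests in between the ordinary writes:

    # live view of the requests the block layer sends to the drive
    blktrace -d /dev/sda -o - | blkparse -i -
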
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7