Message-ID: <alpine.LFD.2.00.0903260940000.3032@localhost.localdomain>
Date: Thu, 26 Mar 2009 09:53:36 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Theodore Tso <tytso@....edu>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Frans Pop <elendil@...net.nl>, mingo@...e.hu, jack@...e.cz,
alan@...rguk.ukuu.org.uk, arjan@...radead.org,
a.p.zijlstra@...llo.nl, npiggin@...e.de, jens.axboe@...cle.com,
drees76@...il.com, jesper@...gh.cc, linux-kernel@...r.kernel.org,
oleg@...hat.com, roland@...hat.com
Subject: Re: relatime: update once per day patches (was: ext3 IO latency
measurements)
On Thu, 26 Mar 2009, Theodore Tso wrote:
>
> I've always thought the right approach would be to have a "atime
> dirty" flag, and update atime, but never flush it out to disk unless
> (a) we're about to unmount the disk, or (b) we need to update some
> other inode in the same inode table block, or (c) we have memory
> pressure and we're trying to evict the inode from the inode cache.
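Concretely, that scheme would look something like this. A sketch only: the
I_DIRTY_ATIME bit and all the flush-reason names below are invented for
illustration, only write_inode_now() is a real helper:

#include <linux/fs.h>
#include <linux/writeback.h>

#define I_DIRTY_ATIME	0x100	/* hypothetical "atime only" dirty bit */

/* The three flush triggers from the proposal (names invented): */
enum atime_flush_reason {
	ATIME_FLUSH_UMOUNT,	/* (a) about to unmount */
	ATIME_FLUSH_PIGGYBACK,	/* (b) a neighbour inode in the same
				 *     inode table block gets written anyway */
	ATIME_FLUSH_EVICT,	/* (c) memory pressure eviction */
};

static void maybe_flush_lazy_atime(struct inode *inode,
				   enum atime_flush_reason reason)
{
	if (!(inode->i_state & I_DIRTY_ATIME))
		return;
	/*
	 * In all three cases the IO is free or unavoidable anyway, so
	 * fold the in-core atime into the on-disk inode now. The reason
	 * is only kept for documentation here -- the action is the same.
	 */
	inode->i_state &= ~I_DIRTY_ATIME;
	write_inode_now(inode, 0);
}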
I tried to do that a few years ago (ok, probably more than a few by now).
It was surprisingly hard.
Some of it is absolutely trivial: we already have multiple "dirty" flags
for the inode (I_DIRTY_SYNC vs I_DIRTY_DATASYNC vs I_DIRTY_PAGES). Adding
an I_DIRTY_ATIME bit for unimportant data was trivial.
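For reference, the existing bits (values as in include/linux/fs.h) plus the
hypothetical one -- the helper below is a sketch of what the marking side
looked like, not actual tree code:

/* Real flags from include/linux/fs.h: */
#define I_DIRTY_SYNC		1	/* inode metadata dirty (timestamps etc) */
#define I_DIRTY_DATASYNC	2	/* dirty fields that matter for fdatasync() */
#define I_DIRTY_PAGES		4	/* inode has dirty pagecache pages */

/* The trivial addition -- hypothetical, never merged: */
#define I_DIRTY_ATIME		(1 << 16)	/* atime updated in core only */

/* Marking is the easy part: update in core, flag it, and do NOT
 * push the inode onto the normal dirty writeback path. */
static void touch_atime_lazy(struct inode *inode)
{
	inode->i_atime = current_fs_time(inode->i_sb);
	inode->i_state |= I_DIRTY_ATIME;
}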
But at least back then, "sync_inode()" (or whatever) was called without being
told the reason for the sync, so it was really hard to decide whether to
write things out or not.
That may actually have changed these days. We now have that
"writeback_control" thing that we pass around for all the IO.
Heh. I just looked back in the history. That writeback_control thing was
added back in 2002, so it's been a _really_ long time since I tried to do
that whole atime thing.
Maybe it's really easy these days.
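Something like this, say, on the writeback side -- the wbc fields used here
(sync_mode, for_reclaim) are real, but the overall logic is just a guess at
how the old patch might look now:

static int atime_write_worth_it(struct inode *inode,
				struct writeback_control *wbc)
{
	/* Dirty for real reasons too?  We're doing the IO anyway. */
	if (inode->i_state & (I_DIRTY_SYNC | I_DIRTY_DATASYNC))
		return 1;
	/* sync(2)/umount style synchronous writeback: flush it all. */
	if (wbc->sync_mode == WB_SYNC_ALL)
		return 1;
	/* Evicting under memory pressure: last chance, write it. */
	if (wbc->for_reclaim)
		return 1;
	/* Plain periodic/background writeback: atime alone can wait. */
	return 0;
}

That's pretty much exactly the "reason" that sync_inode() never used to get.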
Linus