Message-Id: <1154797123.12108.6.camel@kleikamp.austin.ibm.com>
Date: Sat, 05 Aug 2006 11:58:43 -0500
From: Dave Kleikamp <shaggy@...tin.ibm.com>
To: Christoph Hellwig <hch@....de>
Cc: Valerie Henson <val_henson@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Akkana Peck <akkana@...llowsky.com>,
Mark Fasheh <mark.fasheh@...cle.com>,
Jesse Barnes <jesse.barnes@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Chris Wedgwood <cw@...f.org>, jsipek@...sunysb.edu,
Al Viro <viro@....linux.org.uk>
Subject: Re: [RFC] [PATCH] Relative lazy atime
On Sat, 2006-08-05 at 14:25 +0200, Christoph Hellwig wrote:
> On Wed, Aug 02, 2006 at 11:36:22PM -0700, Valerie Henson wrote:
> > (Corrected Chris Wedgwood's name and email.)
> >
> > My friend Akkana followed my advice to use noatime on one of her
> > machines, but discovered that mutt was unusable because it always
> > thought that new messages had arrived since the last time it had
> > checked a folder (mbox format). I thought this was a bummer, so I
wrote a "relative lazy atime" patch that updates the atime only if
the old atime is older than the ctime or mtime. This is not the same
> > as the lazy atime patch of yore[1], which maintained a list of inodes
> > with dirty atimes and wrote them out on unmount.
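[Editorial note: a minimal sketch of the check Valerie describes, in
kernel-style C. The function name and the flag define are hypothetical,
not taken from her patch; timespec_compare() is the stock kernel helper.
The idea is to skip the atime write unless the file has changed since it
was last read, which keeps mutt's "atime newer than mtime means the
folder was already read" test working:]

    /* Update atime only if it is older than mtime or ctime, i.e. the
     * file has been modified or changed since it was last read.
     * Otherwise the on-disk atime already says "read after the last
     * change" and there is nothing new worth recording. */
    static int relative_atime_need_update(struct inode *inode)
    {
            if (timespec_compare(&inode->i_atime, &inode->i_mtime) < 0)
                    return 1;
            if (timespec_compare(&inode->i_atime, &inode->i_ctime) < 0)
                    return 1;
            return 0;       /* atime is current enough; skip the update */
    }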
>
> Another idea, similar to how atime updates work in xfs currently, might
> be interesting: always update atime in core, but don't start a
> transaction just for it - instead, only flush it when you'd do it
> anyway, that is, on another transaction or when evicting the inode.
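[Editorial note: a hedged sketch of the scheme Christoph outlines, not
of xfs's actual code. The helper names and the I_DIRTY_ATIME state bit
are hypothetical, invented for this sketch; mark_inode_dirty_sync() and
CURRENT_TIME are the real kernel interfaces of the time. Atime is
updated only in the in-core inode, and the write-out piggybacks on work
that happens anyway:]

    #define I_DIRTY_ATIME 0x100     /* hypothetical i_state bit: atime flush owed */

    /* Touch atime in the in-core inode only; no transaction is started. */
    void touch_atime_in_core(struct inode *inode)
    {
            inode->i_atime = CURRENT_TIME;   /* cheap, memory only */
            inode->i_state |= I_DIRTY_ATIME; /* remember we owe a flush */
    }

    /* Called when a transaction starts for some other reason, or when
     * the inode is about to be evicted: fold the pending atime update
     * into work that is happening anyway. */
    void flush_pending_atime(struct inode *inode)
    {
            if (inode->i_state & I_DIRTY_ATIME) {
                    mark_inode_dirty_sync(inode);   /* rides the real I/O */
                    inode->i_state &= ~I_DIRTY_ATIME;
            }
    }

[The eviction-time call is what Dave questions below: it turns eviction
of a vfs-clean inode into a potential write.]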
Hmm. That adds a cost to evicting what the vfs considers a clean inode.
It seems wrong, but if that's what xfs does, it must not be a problem.
Shaggy
--
David Kleikamp
IBM Linux Technology Center