Message-ID: <20080319220318.GK155407@sgi.com>
Date: Thu, 20 Mar 2008 09:03:18 +1100
From: David Chinner <dgc@....com>
To: Fengguang Wu <wfg@...l.ustc.edu.cn>
Cc: David Chinner <dgc@....com>, lkml <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH] A deadlock free and best try version of drop_caches()
On Wed, Mar 19, 2008 at 07:27:29PM +0800, Fengguang Wu wrote:
> On Tue, Mar 18, 2008 at 10:28:44PM +1100, David Chinner wrote:
> > Looks like everything is backed up on the inode_lock. Why? Looks
> > like drop_pagecache_sb() is doing something ..... suboptimal.
......
> > Anyone know the reason why drop_pagecache_sb() uses such a brute-force
> > mechanism to free up clean page cache pages?
>
> Because extensive use of it (outside of testing) is discouraged? ;-)
>
> I have been running a longer but safer version, let's merge it?
So you walk the inode hash to find inodes? Seems like a nice idea on
the surface.... Won't it need to hold the iprune_mutex to prevent
races with prune_icache() and invalidate_list()?
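For reference, the locking pattern in question looks roughly like the sketch below. This is a simplified userspace analogue, not the actual VFS code: the struct layout, flag values, and `walk_inode_hash()` are hypothetical stand-ins modeled on the kernel's `I_FREEING`/`I_WILL_FREE` state bits and on holding `iprune_mutex` across the whole traversal so the pruner cannot free inodes out from under the walker.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's inode state flags. */
#define I_FREEING   0x01
#define I_WILL_FREE 0x02

struct inode {
	unsigned int state;
	struct inode *next;	/* hash-chain link */
};

#define HASH_BUCKETS 4
static struct inode *inode_hash[HASH_BUCKETS];

/* Analogue of iprune_mutex: serialises this walk against a pruner. */
static pthread_mutex_t iprune_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Walk every hash chain, invoking fn() on inodes that are not being
 * torn down.  Holding iprune_mutex for the whole walk is what would
 * keep prune_icache()/invalidate_list() from freeing inodes under
 * us -- the race asked about above.  Returns the number visited.
 */
static int walk_inode_hash(void (*fn)(struct inode *))
{
	int visited = 0;

	pthread_mutex_lock(&iprune_mutex);
	for (int i = 0; i < HASH_BUCKETS; i++) {
		for (struct inode *ip = inode_hash[i]; ip; ip = ip->next) {
			if (ip->state & (I_FREEING | I_WILL_FREE))
				continue;	/* being reclaimed: skip */
			fn(ip);
			visited++;
		}
	}
	pthread_mutex_unlock(&iprune_mutex);
	return visited;
}
```

The cost of this scheme is the one implied above: the mutex is held for the entire walk, so reclaim stalls for its duration.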
Hmmmm - what about unhashed inodes? We'll never see them with this
method of traversal. I ask because I'm working on some prototype
patches for XFS that avoid using the inode hash altogether and drive
inode lookup from the multitude of radix trees we have per filesystem
(for parallelised and lockless inode lookup).
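The batched, cursor-driven traversal that a radix-tree-backed lookup enables can be sketched as follows. This is a toy analogue, not XFS code: the flat `slots[]` array stands in for a real radix tree, and `gang_lookup()` mimics only the shape of the kernel's `radix_tree_gang_lookup()` (fill up to `max_items` results starting at `first_index`, report the last index found so the caller can resume).

```c
#include <stddef.h>

/*
 * Toy "radix tree": a sparse index -> pointer map, here just a flat
 * array for brevity.
 */
#define INDEX_SPACE 64

static void *slots[INDEX_SPACE];

static unsigned gang_lookup(void **results, unsigned long first_index,
			    unsigned max_items, unsigned long *last_index)
{
	unsigned found = 0;

	for (unsigned long i = first_index;
	     i < INDEX_SPACE && found < max_items; i++) {
		if (slots[i]) {
			results[found++] = slots[i];
			*last_index = i;
		}
	}
	return found;
}

/*
 * Batched traversal: restartable at any index, so no global lock need
 * be held across batches -- the property that makes this attractive
 * for parallelised, lockless inode lookup.
 */
static unsigned visit_all(unsigned batch)
{
	void *buf[16];
	unsigned long cursor = 0, last = 0;
	unsigned total = 0, n;

	while ((n = gang_lookup(buf, cursor, batch, &last)) > 0) {
		total += n;		/* process buf[0..n-1] here */
		cursor = last + 1;	/* resume past the last hit */
	}
	return total;
}
```

Because the cursor is just an index, a walker can drop all locks between batches and pick up where it left off, which is exactly what a hash-chain walk cannot do safely.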
The above scanning method would not work at all with that sort of
filesystem structure. Perhaps combining the bulk get/put with Jan's
get/put method for walking the sb inode list would be sufficient?
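The get/put walk of the per-sb inode list referred to here follows the usual pin-then-drop-the-lock pattern. Below is a deliberately simplified userspace sketch, not Jan's actual patch: `igrab()`/`iput()` are cut-down analogues of the kernel helpers, a single list replaces `sb->s_inodes`, and the list-advance subtlety (a real walker must keep the inode pinned while fetching `next`) is glossed over with static storage.

```c
#include <pthread.h>
#include <stddef.h>

struct inode {
	int refcount;
	int dead;		/* set when the inode is being torn down */
	struct inode *next;	/* sb->s_inodes list link */
};

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;

/* Caller holds inode_lock.  Pin the inode, or fail if it is dying. */
static struct inode *igrab(struct inode *ip)
{
	if (ip->dead)
		return NULL;
	ip->refcount++;
	return ip;
}

static void iput(struct inode *ip)
{
	pthread_mutex_lock(&inode_lock);
	ip->refcount--;
	pthread_mutex_unlock(&inode_lock);
}

/*
 * Pin each inode with a reference so inode_lock can be dropped while
 * fn() runs, then re-take the lock to advance.  Lock hold times stay
 * short, unlike the hash walk above.  Returns the number processed.
 */
static int walk_sb_inodes(struct inode *head, void (*fn)(struct inode *))
{
	int visited = 0;

	pthread_mutex_lock(&inode_lock);
	for (struct inode *ip = head; ip; ip = ip->next) {
		if (!igrab(ip))
			continue;	/* dying inode: skip it */
		pthread_mutex_unlock(&inode_lock);
		fn(ip);			/* expensive work, lock not held */
		visited++;
		iput(ip);
		pthread_mutex_lock(&inode_lock);
	}
	pthread_mutex_unlock(&inode_lock);
	return visited;
}
```

Combining this with a gang lookup would amortise the lock/unlock churn over a batch of inodes rather than paying it once per inode.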
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group