[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20101002113238.GF4681@dastard>
Date: Sat, 2 Oct 2010 21:32:38 +1000
From: Dave Chinner <david@...morbit.com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, hch@...radead.org,
lnxninja@...ux.vnet.ibm.com, axboe@...nel.dk, pbadari@...ibm.com
Subject: Re: [RFC][PATCH] try not to let dirty inodes fester
On Fri, Oct 01, 2010 at 12:14:49PM -0700, Dave Hansen wrote:
>
> I've got a bug that I've been investigating. The inode cache for a
> certain fs grows and grows, desptite running
>
> echo 2 > /proc/sys/vm/drop_caches
>
> all the time. Not that running drop_caches is a good idea, but it
> _should_ force things to stay under control. That is, unless the
> inodes are dirty.
What's the filesystem, and what's the test case?
> I think I'm seeing a case where the inode's dentry goes away, it
> hits iput_final(). It is dirty, so it stays off the inode_unused
> list waiting around for writeback.
Right - it should be on the bdi->wb->b_dirty list waiting to be
expired and written back or already of the expired writeback queueѕ
and waiting to be written again.
> Then, the periodic writeback happens, and we end up in
> wb_writeback(). One of the first things we do in the loop (before
> writing out inodes) is this:
>
> if (work->for_background && !over_bground_thresh())
> break;
Sure, but the periodic ->for_kupdate flushing should be writing
any inode older than 30s and should be running every 5s. hence the
background writeback aborting should not be affecting the cleaning
of dirty inodes. Hence I don't think this is the problem your are
looking for.
Without knowing what filesystem or what you are doing to grow the
inode cache, it's pretty hard to say much more than this....
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists