Message-ID: <20120301211551.GD13104@quack.suse.cz>
Date: Thu, 1 Mar 2012 22:15:51 +0100
From: Jan Kara <jack@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jan Kara <jack@...e.cz>, Fengguang Wu <fengguang.wu@...el.com>,
Greg Thelen <gthelen@...gle.com>,
Ying Han <yinghan@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
Minchan Kim <minchan.kim@...il.com>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/9] writeback: introduce the pageout work
On Thu 01-03-12 11:42:01, Andrew Morton wrote:
> On Thu, 1 Mar 2012 12:04:04 +0100
> Jan Kara <jack@...e.cz> wrote:
>
> > > iirc, the way I "grabbed" the page was to actually lock it, with
> > > [try_]_lock_page(). And unlock it again way over within the writeback
> > > thread. I forget why I did it this way, rather than get_page() or
> > > whatever. Locking the page is a good way of preventing anyone else
> > > from futzing with it. It also pins the inode, which perhaps meant that
> > > with careful management, I could avoid the igrab()/iput() horrors
> > > discussed above.
> >
> > I think using get_page() might be a good way to go.
>
> get_page() doesn't pin the inode - truncate() will still detach it
> from the address_space().
  Yes, I know. And exactly because of that I'd like to use it. The flusher
thread would lock the page from the work item, check whether it is still
attached to the inode and, if yes, proceed. Otherwise it would just discard
the work item, because we know the page has already been written out by
someone else or truncated.
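  Roughly, I'm thinking of something like the sketch below. Note this is
only illustrative: struct pageout_work and its fields are made-up names for
a work item that was queued with a get_page()'d page and the mapping it
belonged to at queue time, not what the patch actually uses.

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical work item: carries a get_page()'d page plus the
 * address_space it was attached to when the work was queued.
 */
struct pageout_work {
	struct page *page;
	struct address_space *mapping;
};

static void flusher_process_pageout_work(struct pageout_work *work)
{
	struct page *page = work->page;

	lock_page(page);
	if (page->mapping != work->mapping) {
		/*
		 * The page was truncated, or written out and detached from
		 * the inode, after the work item was queued - just discard
		 * the work item.
		 */
		unlock_page(page);
		put_page(page);		/* drop the queue-time reference */
		return;
	}

	/*
	 * Still attached to the inode: safe to proceed and write the page
	 * out here (via whatever pageout path the patch ends up using).
	 */

	unlock_page(page);
	put_page(page);
}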
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR