Message-ID: <52D07472.7020601@redhat.com>
Date: Fri, 10 Jan 2014 17:30:10 -0500
From: Rik van Riel <riel@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>,
	Andrew Morton <akpm@...ux-foundation.org>
CC: Andi Kleen <andi@...stfloor.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Bob Liu <bob.liu@...cle.com>,
	Christoph Hellwig <hch@...radead.org>,
	Dave Chinner <david@...morbit.com>,
	Greg Thelen <gthelen@...gle.com>,
	Hugh Dickins <hughd@...gle.com>, Jan Kara <jack@...e.cz>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Luigi Semenzato <semenzato@...gle.com>,
	Mel Gorman <mgorman@...e.de>,
	Metin Doslu <metin@...usdata.com>,
	Michel Lespinasse <walken@...gle.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Ozgun Erdogan <ozgun@...usdata.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Roman Gushchin <klamm@...dex-team.ru>,
	Ryan Mallon <rmallon@...il.com>, Tejun Heo <tj@...nel.org>,
	Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 6/9] mm + fs: store shadow entries in page cache
On 01/10/2014 01:10 PM, Johannes Weiner wrote:
> Reclaim will be leaving shadow entries in the page cache radix tree
> upon evicting the real page. Because those pages are found through
> the LRU rather than through the inode, reclaim holds no reference on
> the inode, and a concurrent iput() can free it. At this
> point, reclaim must no longer install shadow pages because the inode
> freeing code needs to ensure the page tree is really empty.
>
> Add an address_space flag, AS_EXITING, that the inode freeing code
> sets under the tree lock before doing the final truncate. Reclaim
> will check for this flag before installing shadow pages.
>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
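
For anyone else following along, the ordering that makes this safe is
easy to sketch. The lock and flag below are the ones named in the
description above; final_truncate(), install_shadow() and clear_slot()
are made-up stand-ins for the real call sites in the patch:

	/* inode freeing side, before the final truncate: */
	spin_lock_irq(&mapping->tree_lock);
	set_bit(AS_EXITING, &mapping->flags);
	spin_unlock_irq(&mapping->tree_lock);
	final_truncate(mapping);	/* page tree must end up empty */

	/* reclaim side, removing a page under the same lock: */
	spin_lock_irq(&mapping->tree_lock);
	if (!test_bit(AS_EXITING, &mapping->flags))
		install_shadow(mapping, page);	/* leave shadow entry */
	else
		clear_slot(mapping, page);	/* inode is going away */
	spin_unlock_irq(&mapping->tree_lock);

Once the exiting bit is set under the tree lock, any reclaimer that
takes the lock afterwards sees it and skips the shadow entry, so the
final truncate really does leave an empty tree behind.
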
Reviewed-by: Rik van Riel <riel@...hat.com>
--
All rights reversed