Message-ID: <874lm4tfw3.fsf@yhuang-dev.intel.com>
Date: Mon, 26 Feb 2018 14:38:04 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
<linux-fsdevel@...r.kernel.org>, Al Viro <viro@...IV.linux.org.uk>
Subject: Re: [PATCH] mm: Fix races between address_space dereference and free in page_evicatable
Minchan Kim <minchan@...nel.org> writes:
> Hi Jan,
>
> On Mon, Feb 19, 2018 at 11:57:35AM +0100, Jan Kara wrote:
>> Hi Minchan,
>>
>> On Sun 18-02-18 18:22:45, Minchan Kim wrote:
>> > On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
>> > > From: Huang Ying <ying.huang@...el.com>
>> > >
>> > > When page_mapping() is called and the mapping is dereferenced in
>> > > page_evictable() through shrink_active_list(), it is possible for the
>> > > inode to be truncated and the embedded address space to be freed at
>> > > the same time. This may lead to the following race.
>> > >
>> > > CPU1                                                CPU2
>> > >
>> > > truncate(inode)                                     shrink_active_list()
>> > >   ...                                                 page_evictable(page)
>> > >   truncate_inode_page(mapping, page);
>> > >     delete_from_page_cache(page)
>> > >       spin_lock_irqsave(&mapping->tree_lock, flags);
>> > >         __delete_from_page_cache(page, NULL)
>> > >           page_cache_tree_delete(..)
>> > >             ...                                       mapping = page_mapping(page);
>> > >             page->mapping = NULL;
>> > >             ...
>> > >       spin_unlock_irqrestore(&mapping->tree_lock, flags);
>> > >       page_cache_free_page(mapping, page)
>> > >         put_page(page)
>> > >           if (put_page_testzero(page)) -> false
>> > >
>> > > - inode now has no pages and can be freed including embedded address_space
>> > >
>> > >                                                     mapping_unevictable(mapping)
>> > >                                                       test_bit(AS_UNEVICTABLE, &mapping->flags);
>> > > - we've dereferenced mapping which is potentially already free.
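To make the dereference in the diagram concrete, here is a rough sketch of the pre-patch code path it refers to. The exact kernel source is not quoted in this thread, so treat this as an approximation, not verbatim code:

/* Approximate pre-patch code, for illustration only. */

/* include/linux/pagemap.h */
static inline int mapping_unevictable(struct address_space *mapping)
{
        if (mapping)
                return test_bit(AS_UNEVICTABLE, &mapping->flags); /* dereferences mapping */
        return !!mapping;
}

/* mm/vmscan.c */
int page_evictable(struct page *page)
{
        /* page_mapping() only loads page->mapping; nothing here keeps the
         * inode (and the address_space embedded in it) alive between that
         * load and the test_bit() above. */
        return !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
}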
>> > >
>> > > A similar race exists between swap cache freeing and page_evictable() too.
>> > >
>> > > The address_space in the inode and in the swap cache will be freed after an
>> > > RCU grace period, so the races are fixed by enclosing the page_mapping()
>> > > call and the address_space usage in rcu_read_lock()/rcu_read_unlock(). Some
>> > > comments are added in the code to make it clear what is protected by the
>> > > RCU read lock.
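The patch body is not quoted in this excerpt, so the following is only a sketch of the change described above: wrap the page_mapping() call and the mapping_unevictable() test in an RCU read-side critical section.

/* Sketch of the described fix, not the verbatim patch. */
int page_evictable(struct page *page)
{
        int ret;

        /* The address_space embedded in the inode (or used by the swap
         * cache) is freed only after an RCU grace period, so the RCU read
         * lock keeps it valid across the flags test below. */
        rcu_read_lock();
        ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
        rcu_read_unlock();
        return ret;
}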
>> >
>> > Is that always true for every FS, even upcoming ones?
>> > IOW, do we have any strict rule that FS folks must use RCU (i.e., call_rcu)
>> > to destroy inodes?
>> >
>> > Let's cc linux-fs.
>>
>> That's actually a good question. Pathname lookup relies on inodes being
>> protected by RCU, so "normal" filesystems definitely need to use RCU freeing
>> of inodes. OTOH a filesystem could in theory refuse any attempt at RCU
>> pathname walk (in its .d_revalidate/.d_compare callback) and then get away
>> with freeing its inodes normally AFAICT. I don't see that happening
>> anywhere in the tree, but in theory it is possible with some effort... But
>> frankly I don't see a good reason for that, so all we should do is document
>> that .destroy_inode needs to free the inode structure through RCU if the
>> filesystem uses the page cache? Al?
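The pattern being referred to, roughly as in-tree filesystems implement it, looks like the sketch below; the filesystem name "foo", foo_inode_cachep, and FOO_I() are placeholder names, not code from this thread.

/* Hypothetical filesystem "foo"; placeholder names, for illustration. */
static void foo_i_callback(struct rcu_head *head)
{
        struct inode *inode = container_of(head, struct inode, i_rcu);

        kmem_cache_free(foo_inode_cachep, FOO_I(inode));
}

static void foo_destroy_inode(struct inode *inode)
{
        /* Defer the actual free past an RCU grace period so that lockless
         * readers (RCU path walk, and page_mapping() users as discussed
         * here) never see the inode memory reused under them. */
        call_rcu(&inode->i_rcu, foo_i_callback);
}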
>
> Yes, that would be much better. However, how does this patch fix the problem?
> Although it makes only page_evictable() safe, we could go further with the
> page and finally use page->mapping again.
> For instance,
>
> shrink_active_list
>   page_evictable();
>   ..
>   page_referenced()
>     page_rmapping
>       page->mapping
This only checks the value of page->mapping; it does not dereference
page->mapping. So it should be safe.
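For reference, page_rmapping() at the time did roughly the following (folded together here from page_rmapping() and __page_rmapping() in mm/util.c; approximate, not verbatim):

/* Approximation of mm/util.c at the time, for illustration. */
void *page_rmapping(struct page *page)
{
        unsigned long mapping;

        page = compound_head(page);
        /* Only the value of page->mapping is read; the flag bits that mark
         * anon/KSM mappings are masked off.  The pointed-to address_space
         * (or anon_vma) is never dereferenced here. */
        mapping = (unsigned long)page->mapping;
        mapping &= ~PAGE_MAPPING_FLAGS;

        return (void *)mapping;
}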
Best Regards,
Huang, Ying
> I think the caller should lock the page to protect the entire operation, which
> has been used more widely to pin an address_space.
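A rough sketch of that alternative, assuming the caller already holds the page lock (the helper name and its exact shape are hypothetical, not something proposed in this thread):

/* Hypothetical helper; the caller must hold the page lock. */
static bool page_evictable_locked(struct page *page)
{
        struct address_space *mapping;

        VM_BUG_ON_PAGE(!PageLocked(page), page);

        /* While the page is locked and page->mapping is non-NULL, truncation
         * cannot complete delete_from_page_cache() on this page, so the
         * inode and its embedded address_space cannot be freed. */
        mapping = page_mapping(page);
        if (mapping && test_bit(AS_UNEVICTABLE, &mapping->flags))
                return false;

        return !PageMlocked(page);
}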
>
> Thanks.