Message-ID: <87r2rixdbw.fsf@yhuang-dev.intel.com>
Date: Tue, 26 Dec 2017 13:33:55 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, Hugh Dickins <hughd@...gle.com>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Johannes Weiner <hannes@...xchg.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Shaohua Li <shli@...com>,
Mel Gorman <mgorman@...hsingularity.net>,
	Jérôme Glisse <jglisse@...hat.com>, Michal Hocko <mhocko@...e.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Rik van Riel <riel@...hat.com>, Jan Kara <jack@...e.cz>,
Dave Jiang <dave.jiang@...el.com>,
Aaron Lu <aaron.lu@...el.com>, Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH -V4 -mm] mm, swap: Fix race between swapoff and some swap operations

Minchan Kim <minchan@...nel.org> writes:
> On Fri, Dec 22, 2017 at 10:14:43PM +0800, Huang, Ying wrote:
>> Minchan Kim <minchan@...nel.org> writes:
>>
>> > On Thu, Dec 21, 2017 at 03:48:56PM +0800, Huang, Ying wrote:
>> >> Minchan Kim <minchan@...nel.org> writes:
>> >>
>> >> > On Wed, Dec 20, 2017 at 09:26:32AM +0800, Huang, Ying wrote:
>> >> >> From: Huang Ying <ying.huang@...el.com>
>> >> >>
>> >> >> When swapin is performed, after getting the swap entry information
>> >> >> from the page table, the system will swap in the swap entry without
>> >> >> any lock held to prevent the swap device from being swapped off.
>> >> >> This may cause a race like the one below,
>> >> >>
>> >> >> CPU 1                                CPU 2
>> >> >> -----                                -----
>> >> >>                                      do_swap_page
>> >> >>                                        swapin_readahead
>> >> >>                                          __read_swap_cache_async
>> >> >> swapoff                                    swapcache_prepare
>> >> >>   p->swap_map = NULL                         __swap_duplicate
>> >> >>                                                p->swap_map[?] /* !!! NULL pointer access */
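
(For context, a rough sketch of the unguarded access on the swapin side;
this is illustrative only, not the literal kernel code of that era, but it
shows where the NULL pointer dereference would happen:)

	/* Illustrative sketch of __swap_duplicate(), not the exact code. */
	static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
	{
		struct swap_info_struct *p;
		unsigned long offset, type;
		unsigned char count;

		type = swp_type(entry);
		if (type >= nr_swapfiles)
			return -EINVAL;
		p = swap_info[type];	/* no reference held; swapoff can free it */
		offset = swp_offset(entry);
		if (offset >= p->max)
			return -EINVAL;

		count = p->swap_map[offset];	/* NULL pointer dereference if
						 * swapoff has set p->swap_map
						 * to NULL in the meantime */
		/* ... rest of the function elided ... */
		return 0;
	}
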
>> >> >>
>> >> >> Because swapoff is usually done only at system shutdown, the race
>> >> >> may not hit many people in practice.  But it is still a race that
>> >> >> needs to be fixed.
>> >> >>
>> >> >> To fix the race, get_swap_device() is added to check whether the
>> >> >> specified swap entry is valid in its swap device.  If so, it will
>> >> >> keep the swap entry valid by preventing the swap device from being
>> >> >> swapped off, until put_swap_device() is called.
>> >> >>
>> >> >> Because swapoff() is a very rare code path, to make the normal path
>> >> >> run as fast as possible, RCU instead of a reference count is used to
>> >> >> implement get/put_swap_device().  From get_swap_device() to
>> >> >> put_swap_device(), the RCU read lock is held, so synchronize_rcu() in
>> >> >> swapoff() will wait until put_swap_device() is called.
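
(A minimal sketch of the get/put_swap_device() idea described above, for
readers following along.  The SWP_VALID flag and the exact checks below are
illustrative assumptions, not necessarily the identifiers used in the patch:)

	struct swap_info_struct *get_swap_device(swp_entry_t entry)
	{
		struct swap_info_struct *si;
		unsigned long offset, type = swp_type(entry);

		if (type >= nr_swapfiles)
			return NULL;
		si = swap_info[type];

		rcu_read_lock();	/* pairs with synchronize_rcu() in swapoff() */
		if (!(si->flags & SWP_VALID))
			goto unlock;	/* raced with swapoff */
		offset = swp_offset(entry);
		if (offset >= si->max)
			goto unlock;
		return si;		/* caller must call put_swap_device() */
	unlock:
		rcu_read_unlock();
		return NULL;
	}

	static inline void put_swap_device(struct swap_info_struct *si)
	{
		rcu_read_unlock();
	}

	/* swapoff() side (sketch): after marking the device invalid, wait for
	 * all RCU readers before freeing swap_map, cluster_info, etc. */
	si->flags &= ~SWP_VALID;
	synchronize_rcu();
	/* now it is safe to free si->swap_map, si->cluster_info, ... */
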
>> >> >>
>> >> >> In addition to the swap_map, cluster_info, etc. data structures in
>> >> >> struct swap_info_struct, the swap cache radix tree will be freed
>> >> >> after swapoff, so this patch fixes the race between swap cache lookup
>> >> >> and swapoff too.
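
(And a sketch of how a swap-entry user would then be converted; again this
is illustrative, not the literal patch:)

	si = get_swap_device(entry);
	if (!si)
		goto out;	/* entry is stale, swapoff is in progress */
	/* si->swap_map, si->cluster_info and the swap cache radix tree
	 * can be accessed safely here */
	put_swap_device(si);
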
>> >> >>
>> >> >> Cc: Hugh Dickins <hughd@...gle.com>
>> >> >> Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>> >> >> Cc: Minchan Kim <minchan@...nel.org>
>> >> >> Cc: Johannes Weiner <hannes@...xchg.org>
>> >> >> Cc: Tim Chen <tim.c.chen@...ux.intel.com>
>> >> >> Cc: Shaohua Li <shli@...com>
>> >> >> Cc: Mel Gorman <mgorman@...hsingularity.net>
>> >> >> Cc: "Jrme Glisse" <jglisse@...hat.com>
>> >> >> Cc: Michal Hocko <mhocko@...e.com>
>> >> >> Cc: Andrea Arcangeli <aarcange@...hat.com>
>> >> >> Cc: David Rientjes <rientjes@...gle.com>
>> >> >> Cc: Rik van Riel <riel@...hat.com>
>> >> >> Cc: Jan Kara <jack@...e.cz>
>> >> >> Cc: Dave Jiang <dave.jiang@...el.com>
>> >> >> Cc: Aaron Lu <aaron.lu@...el.com>
>> >> >> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
>> >> >>
>> >> >> Changelog:
>> >> >>
>> >> >> v4:
>> >> >>
>> >> >> - Use synchronize_rcu() in enable_swap_info() to reduce overhead of
>> >> >> normal paths further.
>> >> >
>> >> > Hi Huang,
>> >>
>> >> Hi, Minchan,
>> >>
>> >> > This version is much better than the old one.  To me, that's not
>> >> > because of the RCU/SRCU/refcount choice, but because it puts the swap
>> >> > device dependency (i.e., get/put) into the swap related functions
>> >> > themselves, so users who aren't interested in swap don't need to care
>> >> > about it.  Good.
>> >> >
>> >> > The problem is caused by freeing the swap related data structures
>> >> > *dynamically*, while the old swap logic was based on static data
>> >> > structures (i.e., never freed, and then we verify whether an entry is
>> >> > stale).  So, I reviewed some places that use PageSwapCache and
>> >> > swp_entry_t and so could access the swap related data structures.
>> >> >
>> >> > An example is __isolate_lru_page.
>> >> >
>> >> > It calls page_mapping() to get an address_space.
>> >> > What happens if the page is in the swap cache and races with swapoff?
>> >> > The mapping it got could disappear because of the race.  Right?
>> >>
>> >> Yes.  We should think about that.  Considering the file cache pages,
>> >> the address_space backing the file cache pages may be freed dynamically
>> >> too.  So to use the page_mapping() return value for the file cache
>> >> pages, some kind of locking is needed to guarantee the address_space
>> >> isn't freed under us.  The page may be locked, or under writeback, or
>> >> some other locks
>> >
>> > I didn't look at the code in detail, but I guess every file page should
>> > be freed before the address space is destroyed, and page_lock/lru_lock
>> > makes that safe.  So it wouldn't be a problem.
>> >
>> > However, in the case of swapoff, it doesn't remove pages from the LRU
>> > list, so there is no lock to prevent the race at this moment. :(
>>
>> Take a look at the file cache pages and the file cache address_space
>> freeing code path.  It appears that a similar situation is possible for
>> them too.
>>
>> The file cache pages will be deleted from the file cache address_space
>> before the address_space (embedded in the inode) is freed.  But they
>> will be deleted from the LRU list only when their refcount drops to
>> zero; please take a look at put_page() and release_pages().  Meanwhile,
>> the address_space will be freed after the references to all file cache
>> pages have been put.  So if someone holds a reference to a file cache
>> page for quite a long time, it is possible for that file cache page to
>> still be on the LRU list after the inode/address_space is freed.
>>
>> And I found that the inode/address_space is freed with call_rcu().  I
>> don't know whether this is related to page_mapping().
>>
>> This is just my understanding.
>
> Hmm, it smells like a bug in __isolate_lru_page.
>
> Ccing Mel:
>
> What lock protects the address_space from being destroyed when a race
> happens between inode truncation and __isolate_lru_page?
>
>>
>> >> need to be held, for example, the page table lock, or lru_lock, etc.
>> >> For __isolate_lru_page(), lru_lock will be held when it is called.  And
>> >> we will call synchronize_rcu() between clearing PageSwapCache and
>> >> freeing the swap cache, so the usage of the swap cache in
>> >> __isolate_lru_page() should be safe.  Do you think my analysis makes
>> >> sense?
>> >
>> > I don't understand how synchronize_rcu() closes the race with
>> > spin_lock().  Paul might help with that.
>>
>> Per my understanding, spin_lock() calls preempt_disable(), so
>> synchronize_rcu() will wait until spin_unlock() is called.
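
(To spell out that reasoning: the sketch below assumes a !CONFIG_PREEMPT
kernel, where any preemption-disabled region acts as an implicit RCU
read-side critical section; with preemptible RCU the argument needs more
care.  The lock name is generic, not tied to a particular kernel version:)

	/* Reader side, e.g. the path that reaches __isolate_lru_page(): */
	spin_lock_irq(&lru_lock);	/* implies preempt_disable() */
	mapping = page_mapping(page);	/* may look at swap cache state */
	/* ... */
	spin_unlock_irq(&lru_lock);

	/* swapoff side: */
	/* clear PageSwapCache / remove swap cache entries ... */
	synchronize_rcu();	/* cannot return while any CPU still has
				 * preemption disabled inside the lru_lock
				 * section above */
	/* ... then free the swap cache radix tree */
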
>>
>> > Even if we solve that, there is another problem I spotted.
>> > When I look at migrate_vma_pages, it passes the mapping to
>> > migrate_page, which accesses mapping->tree_lock unconditionally even
>> > though the address_space is already gone.
>>
>> Before migrate_vma_pages() is called, migrate_vma_prepare() is called,
>> where pages are locked. So it is safe.
>
> I missed that. You're right. It's no problem. Thanks.
>
>>
>> > Hmm, I didn't check all the sites that use PageSwapCache and
>> > swp_entry_t, but my gut feeling is that it won't be simple.
>>
>> Yes. We should check all sites. Thanks for your help!
>
> You might have started checking already and found it.
> Many architectures use page_mapping() in their cache flush code, so we
> should check there, too.

Thanks for the reminder!  I will check them.

Best Regards,
Huang, Ying