Message-Id: <6e412428-86c7-9032-a3b6-41980ad667f9@linux.vnet.ibm.com>
Date: Tue, 7 Mar 2017 14:25:48 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: Minchan Kim <minchan@...nel.org>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, kernel-team@....com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function
On 03/07/2017 12:20 PM, Minchan Kim wrote:
> Hi Anshuman,
>
> On Mon, Mar 06, 2017 at 03:10:17PM +0530, Anshuman Khandual wrote:
>> On 03/06/2017 07:39 AM, Minchan Kim wrote:
>>> On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote:
>>>> On 03/02/2017 12:09 PM, Minchan Kim wrote:
>>>>> try_to_munlock() returns SWAP_MLOCK if one of the VMAs mapping
>>>>> the page has the VM_LOCKED flag set. In that case, the VM sets
>>>>> PG_mlocked on the page, unless the page is a pte-mapped THP,
>>>>> which cannot be mlocked.
>>>> Right.
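
For reference, the behavior described above comes from the VM_LOCKED
handling in try_to_unmap_one(). A minimal sketch of that logic, with
the locking and rmap-walk details elided (so not the exact upstream
code):

	if (vma->vm_flags & VM_LOCKED) {
		/* PTE-mapped THP are never mlocked */
		if (!PageTransCompound(page)) {
			/* holding the pte lock, no mmap_sem needed */
			mlock_vma_page(page);	/* sets PG_mlocked */
		}
		ret = SWAP_MLOCK;	/* the retval this patch removes */
		goto out_unmap;
	}
	if (flags & TTU_MUNLOCK)
		goto out_unmap;	/* munlock only cares about VM_LOCKED vmas */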
>>>>
>>>>> With that, __munlock_isolated_page() can use PageMlocked to check
>>>>> whether try_to_munlock() succeeded, without relying on
>>>>> try_to_munlock()'s return value. This helps keep try_to_unmap()/
>>>>> try_to_unmap_one() simple in the upcoming patches.
>>>> Right.
>>>>
>>>>> Cc: Vlastimil Babka <vbabka@...e.cz>
>>>>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>>>>> Signed-off-by: Minchan Kim <minchan@...nel.org>
>>>>> ---
>>>>> include/linux/rmap.h | 2 +-
>>>>> mm/mlock.c | 6 ++----
>>>>> mm/rmap.c | 16 ++++------------
>>>>> 3 files changed, 7 insertions(+), 17 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>>>> index b556eef..1b0cd4c 100644
>>>>> --- a/include/linux/rmap.h
>>>>> +++ b/include/linux/rmap.h
>>>>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
>>>>> * called in munlock()/munmap() path to check for other vmas holding
>>>>> * the page mlocked.
>>>>> */
>>>>> -int try_to_munlock(struct page *);
>>>>> +void try_to_munlock(struct page *);
>>>>>
>>>>> void remove_migration_ptes(struct page *old, struct page *new, bool locked);
>>>>>
>>>>> diff --git a/mm/mlock.c b/mm/mlock.c
>>>>> index cdbed8a..d34a540 100644
>>>>> --- a/mm/mlock.c
>>>>> +++ b/mm/mlock.c
>>>>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
>>>>> */
>>>>> static void __munlock_isolated_page(struct page *page)
>>>>> {
>>>>> - int ret = SWAP_AGAIN;
>>>>> -
>>>>> /*
>>>>> * Optimization: if the page was mapped just once, that's our mapping
>>>>> * and we don't need to check all the other vmas.
>>>>> */
>>>>> if (page_mapcount(page) > 1)
>>>>> - ret = try_to_munlock(page);
>>>>> + try_to_munlock(page);
>>>>>
>>>>> /* Did try_to_unlock() succeed or punt? */
>>>>> - if (ret != SWAP_MLOCK)
>>>>> + if (!PageMlocked(page))
>>>> Checks if the page is still mlocked or not.
>>>>
>>>>> count_vm_event(UNEVICTABLE_PGMUNLOCKED);
>>>>>
>>>>> putback_lru_page(page);
>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>>> index 0a48958..61ae694 100644
>>>>> --- a/mm/rmap.c
>>>>> +++ b/mm/rmap.c
>>>>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
>>>>> * Called from munlock code. Checks all of the VMAs mapping the page
>>>>> * to make sure nobody else has this page mlocked. The page will be
>>>>> * returned with PG_mlocked cleared if no other vmas have it mlocked.
>>>>> - *
>>>>> - * Return values are:
>>>>> - *
>>>>> - * SWAP_AGAIN - no vma is holding page mlocked, or,
>>>>> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem
>>>>> - * SWAP_FAIL - page cannot be located at present
>>>>> - * SWAP_MLOCK - page is now mlocked.
>>>>> */
>>>>> -int try_to_munlock(struct page *page)
>>>>> -{
>>>>> - int ret;
>>>>>
>>>>> +void try_to_munlock(struct page *page)
>>>>> +{
>>>>> struct rmap_walk_control rwc = {
>>>>> .rmap_one = try_to_unmap_one,
>>>>> .arg = (void *)TTU_MUNLOCK,
>>>>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
>>>>> };
>>>>>
>>>>> VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
>>>>> + VM_BUG_ON_PAGE(PageMlocked(page), page);
>>>> We call this on a page to check whether it is mlocked via any of
>>>> its mapping VMAs. So it is entirely possible that the page is
>>>> mlocked, in which case the above condition is true and we trigger
>>>> the VM_BUG report there. The point is: if that is a valid
>>>> possibility, why have we added the above check?
>>> If I read the code properly, the callers of __munlock_isolated_page
>>> always pass a TestClearPageMlocked page down to try_to_munlock
>> Right.
>>
>>> (e.g., munlock_vma_page and __munlock_pagevec), so I thought
>>> try_to_munlock should be called on a non-PG_mlocked page, and that
>>> try_to_unmap_one marks the page PG_mlocked again once it finds a
>>> VM_LOCKED VMA for it. IOW, a non-PG_mlocked page is a precondition
>>> for try_to_munlock.
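
For the archive, the caller pattern described above looks roughly
like this (a simplified sketch of munlock_vma_page(); the stats
updates and the isolation-failure path are elided):

	if (TestClearPageMlocked(page)) {
		/* the page is !PG_mlocked from here on */
		if (__munlock_isolate_lru_page(page, true)) {
			spin_unlock_irq(zone_lru_lock(zone));
			/* eventually calls try_to_munlock() */
			__munlock_isolated_page(page);
		}
	}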
>> Okay, I had missed that part. Nonetheless, isn't this a separate
>> issue that should go into a different patch rather than into these
>> cleanups?
> If that precondition does not hold, this patch changes the behavior
> slightly: the UNEVICTABLE_PGMUNLOCKED count would mismatch compared
> to the old code, and I wanted to catch that.
>
> If you still think it is a separate issue, I will split it out;
> please tell me. However, I still think it is fine to merge it into
> this cleanup patch.
Got it, it's okay. Let this change be part of this patch itself.
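
Just to spell out the accounting concern, the old and new checks in
__munlock_isolated_page() compare like this (sketch only, not the
literal diff):

	/* before: count unless try_to_munlock() reported SWAP_MLOCK */
	if (ret != SWAP_MLOCK)
		count_vm_event(UNEVICTABLE_PGMUNLOCKED);

	/* after: count unless a VM_LOCKED vma re-set PG_mlocked */
	if (!PageMlocked(page))
		count_vm_event(UNEVICTABLE_PGMUNLOCKED);

	/*
	 * If a page could ever reach try_to_munlock() with PG_mlocked
	 * already set, the new check would skip the count where the
	 * old one would not have.  The added
	 * VM_BUG_ON_PAGE(PageMlocked(page), page) documents and
	 * enforces that this cannot happen.
	 */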