Message-ID: <2171f0a9-d01a-e863-2009-3f1bfa249d6c@linux.alibaba.com>
Date: Fri, 25 Oct 2019 11:49:26 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: hughd@...gle.com, kirill.shutemov@...ux.intel.com,
aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: thp: clear PageDoubleMap flag when the last PMD map gone

On 10/25/19 9:39 AM, Kirill A. Shutemov wrote:
> On Fri, Oct 25, 2019 at 07:32:33PM +0300, Kirill A. Shutemov wrote:
>> On Fri, Oct 25, 2019 at 08:58:22AM -0700, Yang Shi wrote:
>>>
>>> On 10/25/19 8:36 AM, Kirill A. Shutemov wrote:
>>>> On Fri, Oct 25, 2019 at 01:27:46AM +0800, Yang Shi wrote:
>>>>> File THP sets the PageDoubleMap flag the first time it gets PTE mapped,
>>>>> but the flag is never cleared until the THP is freed. This results in an
>>>>> unbalanced state, although it is not a big deal.
>>>>>
>>>>> Clear the flag when the last compound_mapcount is gone. It should also be
>>>>> cleared when all the PTE maps are gone (the page becomes PMD mapped only),
>>>>> but that would require checking every subpage's _mapcount each time any
>>>>> subpage's rmap is removed, and the overhead may not be worth it. Anonymous
>>>>> THP likewise only clears the PageDoubleMap flag when the last PMD map is
>>>>> gone.
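
(To make the imbalance concrete: the entry points below are the real rmap
functions, but the sequence itself is just a made-up illustration, not taken
from the changelog.)

/*
 *   page_add_file_rmap(page, false);   // first PTE map: SetPageDoubleMap(head)
 *   page_remove_rmap(page, false);     // last PTE map goes away
 *   page_add_file_rmap(head, true);    // mapped again purely with a PMD
 *   page_remove_rmap(head, true);      // last PMD map goes away as well
 *   // PageDoubleMap(head) is still set, and stays set until the THP is freed
 */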
>>>> NAK, sorry.
>>>>
>>>> The key difference from anon THP is that a file THP can be mapped with a
>>>> PMD again after all PMD (or all) mappings are gone.
>>>>
>>>> Your patch breaks the case where the page gets mapped with a PMD again
>>>> while it is still mapped with PTEs. Who would set PageDoubleMap() in that
>>>> case?
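
(For reference, the only setter I can find for file pages is the PTE branch of
page_add_file_rmap(); the snippet below is abridged from my reading of ~v5.4
mm/rmap.c, so other trees may differ. The compound/PMD branch never sets the
flag, which is exactly the problem with clearing it on the last PMD unmap.)

	} else {
		if (PageTransCompound(page) && page_mapping(page)) {
			VM_WARN_ON_ONCE(!PageLocked(page));
			/* a PTE map of a compound file page marks it double mapped */
			SetPageDoubleMap(compound_head(page));
			if (PageMlocked(page))
				clear_page_mlock(compound_head(page));
		}
		if (!atomic_inc_and_test(&page->_mapcount))
			goto out;
	}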
>>> Aha, yes, you are right. I missed that point. However, I'm wondering whether
>>> we could move this up a little bit, like this:
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index d17cbf3..ac046fd 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1230,15 +1230,17 @@ static void page_remove_file_rmap(struct page *page, bool compound)
>>>  			if (atomic_add_negative(-1, &page[i]._mapcount))
>>>  				nr++;
>>>  		}
>>> +
>>> +		/* No PTE map anymore */
>>> +		if (nr == HPAGE_PMD_NR)
>>> +			ClearPageDoubleMap(compound_head(page));
>>> +
>>>  		if (!atomic_add_negative(-1, compound_mapcount_ptr(page)))
>>>  			goto out;
>>>  		if (PageSwapBacked(page))
>>>  			__dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
>>>  		else
>>>  			__dec_node_page_state(page, NR_FILE_PMDMAPPED);
>>> -
>>> -		/* The last PMD map is gone */
>>> -		ClearPageDoubleMap(compound_head(page));
>>>  	} else {
>>>  		if (!atomic_add_negative(-1, &page->_mapcount))
>>>  			goto out;
>>>
>>>
>>> This should guarantee there are no PTE maps anymore, so it should be safe to
>>> clear the flag.
>> At first glance it looks safe, but let me think more about it. I didn't
>> expect it to be that easy :P
> How do you protect against races? What prevents another thread or process
> from mapping the page with PTEs after you've calculated 'nr'?
>
> I don't remember the code that well, but I believe we don't require the
> page lock in all cases... Or do we?
No, the page lock is required when adding a PTE rmap, but not when removing
rmap, i.e. on huge PMD split. It looks like we can't prevent the races between
processes; threads within a process are protected by the ptl.
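
So, unless I'm misreading the code, an interleaving roughly like this would
still be possible with the diff above (illustration only; the function names
are the real rmap entry points, the scenario is made up):

/*
 *   process A (unmap/split, no page lock)     process B (fault, holds page lock)
 *
 *   page_remove_rmap(head, true)
 *     loop: nr ends up == HPAGE_PMD_NR
 *                                             page_add_file_rmap(page, false)
 *                                               SetPageDoubleMap(head)
 *                                               inc page->_mapcount
 *     ClearPageDoubleMap(head)   <-- wipes the flag while B's PTE map exists
 *     dec compound_mapcount
 */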