Message-ID: <45d0aa4c-e438-476e-a0b2-a129ba1975b4@arm.com>
Date: Mon, 27 Nov 2023 11:30:31 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, anshuman.khandual@....com,
catalin.marinas@....com, david@...hat.com, fengwei.yin@...el.com,
hughd@...gle.com, itaru.kitayama@...il.com, jhubbard@...dia.com,
kirill.shutemov@...ux.intel.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, mcgrof@...nel.org, rientjes@...gle.com,
shy828301@...il.com, vbabka@...e.cz, wangkefeng.wang@...wei.com,
willy@...radead.org, ying.huang@...el.com, yuzhao@...gle.com,
ziy@...dia.com
Subject: Re: [RESEND PATCH v7 02/10] mm: Non-pmd-mappable, large folios for
folio_add_new_anon_rmap()
On 27/11/2023 04:36, Barry Song wrote:
>> void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>> unsigned long address)
>> {
>> - int nr;
>> + int nr = folio_nr_pages(folio);
>>
>> - VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>> + VM_BUG_ON_VMA(address < vma->vm_start ||
>> + address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>> __folio_set_swapbacked(folio);
>> + __folio_set_anon(folio, vma, address, true);
>>
>> - if (likely(!folio_test_pmd_mappable(folio))) {
>> + if (likely(!folio_test_large(folio))) {
>> /* increment count (starts at -1) */
>> atomic_set(&folio->_mapcount, 0);
>> - nr = 1;
>> + SetPageAnonExclusive(&folio->page);
>> + } else if (!folio_test_pmd_mappable(folio)) {
>> + int i;
>> +
>> + for (i = 0; i < nr; i++) {
>> + struct page *page = folio_page(folio, i);
>> +
>> + /* increment count (starts at -1) */
>> + atomic_set(&page->_mapcount, 0);
>> + SetPageAnonExclusive(page);
>
> Hi Ryan,
>
> We are doing an entire mapping here, right? What is the reason to
> increase the mapcount of each subpage? Shouldn't we only increase
> the mapcount of a subpage in the split or double-map cases?
>
> In page_add_anon_rmap(), are we also increasing the mapcount of
> each subpage for the fork() case, where the entire large folio
> is inherited by the child process?
I think this is all answered by the conversation we just had in the context of
the contpte series. Let me know if you still have concerns.
>
>> + }
>> +
>> + atomic_set(&folio->_nr_pages_mapped, nr);
>> } else {
>> /* increment count (starts at -1) */
>> atomic_set(&folio->_entire_mapcount, 0);
>> atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
>> - nr = folio_nr_pages(folio);
>> + SetPageAnonExclusive(&folio->page);
>> __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
>> }
>>
>> __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
>> - __folio_set_anon(folio, vma, address, true);
>> - SetPageAnonExclusive(&folio->page);
>> }
>
> Thanks
> Barry
>