Message-ID: <01071d9c-483f-2d95-87a6-e1030acaf8dd@arm.com>
Date: Wed, 15 Mar 2023 16:08:03 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-arch@...r.kernel.org
Cc: Yin Fengwei <fengwei.yin@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 34/36] rmap: add folio_add_file_rmap_range()
On 15/03/2023 13:34, Ryan Roberts wrote:
> On 15/03/2023 05:14, Matthew Wilcox (Oracle) wrote:
>> From: Yin Fengwei <fengwei.yin@...el.com>
>>
>> folio_add_file_rmap_range() allows adding pte mappings to a specific
>> range of a file folio. Compared to page_add_file_rmap(), it batches
>> the __lruvec_stat updates for large folios.
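
The batching referred to here: once the whole range has been processed, the
NR_FILE_MAPPED accounting is done with a single call of the accumulated size
rather than one call per page. In rmap.c terms, roughly:

	if (nr)
		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);

where nr counts the pages in the range that gained their first pte mapping.
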
>>
>> Signed-off-by: Yin Fengwei <fengwei.yin@...el.com>
>> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
>> ---
>> include/linux/rmap.h | 2 ++
>> mm/rmap.c | 60 +++++++++++++++++++++++++++++++++-----------
>> 2 files changed, 48 insertions(+), 14 deletions(-)
>>
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index b87d01660412..a3825ce81102 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>> unsigned long address);
>> void page_add_file_rmap(struct page *, struct vm_area_struct *,
>> bool compound);
>> +void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
>> + struct vm_area_struct *, bool compound);
>> void page_remove_rmap(struct page *, struct vm_area_struct *,
>> bool compound);
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 4898e10c569a..a91906b28835 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1301,31 +1301,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>> }
>>
>> /**
>> - * page_add_file_rmap - add pte mapping to a file page
>> - * @page: the page to add the mapping to
>> + * folio_add_file_rmap_range - add pte mapping to page range of a folio
>> + * @folio: The folio to add the mapping to
>> + * @page: The first page to add
>> + * @nr_pages: The number of pages which will be mapped
>> * @vma: the vm area in which the mapping is added
>> * @compound: charge the page as compound or small page
>> *
>> + * The page range of folio is defined by [first_page, first_page + nr_pages)
>> + *
>> * The caller needs to hold the pte lock.
>> */
>> -void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>> - bool compound)
>> +void folio_add_file_rmap_range(struct folio *folio, struct page *page,
>> + unsigned int nr_pages, struct vm_area_struct *vma,
>> + bool compound)
>> {
>> - struct folio *folio = page_folio(page);
>> atomic_t *mapped = &folio->_nr_pages_mapped;
>> - int nr = 0, nr_pmdmapped = 0;
>> - bool first;
>> + unsigned int nr_pmdmapped = 0, first;
>> + int nr = 0;
>>
>> - VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
>> + VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
>>
>> /* Is page being mapped by PTE? Is this its first map to be added? */
>> if (likely(!compound)) {
>> - first = atomic_inc_and_test(&page->_mapcount);
>> - nr = first;
>> - if (first && folio_test_large(folio)) {
>> - nr = atomic_inc_return_relaxed(mapped);
>> - nr = (nr < COMPOUND_MAPPED);
>> - }
>> + do {
>> + first = atomic_inc_and_test(&page->_mapcount);
>> + if (first && folio_test_large(folio)) {
>> + first = atomic_inc_return_relaxed(mapped);
>> + first = (nr < COMPOUND_MAPPED);
>
> This still contains the typo that Yin Fengwei spotted in the previous version:
> https://lore.kernel.org/linux-mm/20230228213738.272178-1-willy@infradead.org/T/#m84673899e25bc31356093a1177941f2cc35e5da8
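
To spell out the fix: the second assignment should presumably test the value
that atomic_inc_return_relaxed() just returned, i.e.:

			first = atomic_inc_return_relaxed(mapped);
			first = (first < COMPOUND_MAPPED);

mirroring the "nr < COMPOUND_MAPPED" test in the code being removed.
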
>
> FYI, I'm seeing a perf regression of about 1% when compiling the kernel on
> Ampere Altra (arm64) with this whole series on top of v6.3-rc1 (in a VM
> using an ext4 filesystem). It looks like instruction aborts are taking much
> longer, and a selection of syscalls are a bit slower. Still hunting down the
> root cause; I'll report once I have a conclusive diagnosis.
I'm sorry - I'm struggling to find the exact cause. But it's spending over 2x
the amount of time in the instruction abort handling code once patches 32-36
are included. Everything in the flame graph is just taking longer. Perhaps we
are somehow getting more instruction aborts? I have the flamegraphs if anyone
wants them - just shout and I'll email them separately.
>
> Thanks,
> Ryan
>
>
>> + }
>> +
>> + if (first)
>> + nr++;
>> + } while (page++, --nr_pages > 0);
>> } else if (folio_test_pmd_mappable(folio)) {
>> /* That test is redundant: it's for safety or to optimize out */
>>
>> @@ -1354,6 +1362,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>> mlock_vma_folio(folio, vma, compound);
>> }
>>
>> +/**
>> + * page_add_file_rmap - add pte mapping to a file page
>> + * @page: the page to add the mapping to
>> + * @vma: the vm area in which the mapping is added
>> + * @compound: charge the page as compound or small page
>> + *
>> + * The caller needs to hold the pte lock.
>> + */
>> +void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>> + bool compound)
>> +{
>> + struct folio *folio = page_folio(page);
>> + unsigned int nr_pages;
>> +
>> + VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
>> +
>> + if (likely(!compound))
>> + nr_pages = 1;
>> + else
>> + nr_pages = folio_nr_pages(folio);
>> +
>> + folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
>> +}
>> +
>> /**
>> * page_remove_rmap - take down pte mapping from a page
>> * @page: page to remove mapping from
>