Message-ID: <adac1493-da6d-4581-b8b1-e5911b68cdd7@arm.com>
Date: Mon, 17 Jul 2023 16:49:19 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Yin Fengwei <fengwei.yin@...el.com>,
David Hildenbrand <david@...hat.com>,
Yu Zhao <yuzhao@...gle.com>, Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v1 2/3] mm: Implement folio_remove_rmap_range()
On 17/07/2023 16:07, Matthew Wilcox wrote:
> On Mon, Jul 17, 2023 at 03:31:09PM +0100, Ryan Roberts wrote:
>> +/*
>> + * folio_remove_rmap_range - take down pte mappings from a range of pages
>> + * belonging to a folio. All pages are accounted as small pages.
>> + * @folio: folio that all pages belong to
>> + * @page: first page in range to remove mapping from
>> + * @nr: number of pages in range to remove mapping from
>> + * @vma: the vm area from which the mapping is removed
>> + *
>> + * The caller needs to hold the pte lock.
>> + */
>
> This could stand a little reworking. How about this?
>
> /**
> * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
> * @folio: Folio containing all pages in range.
> * @page: First page in range to unmap.
> * @nr: Number of pages to unmap.
> * @vma: The VM area containing the range.
> *
> * All pages in the range must belong to the same VMA & folio. They
> * must be mapped with PTEs, not a PMD.
> *
> * Context: Caller holds the pte lock.
> */
LGTM! thanks.
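
FWIW, for anyone reviewing the "Context:" line above: a caller would look
roughly like the sketch below. This is illustrative only, not code from this
series; it assumes the usual pte_offset_map_lock() pattern, and mm/pmd/addr/
nr/page/folio/vma are stand-in names.

	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return;
	/* ... clear the nr ptes mapping page .. page + nr - 1 ... */
	folio_remove_rmap_range(folio, page, nr, vma);
	pte_unmap_unlock(pte, ptl);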
>
>> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
>> +				int nr, struct vm_area_struct *vma)
>> +{
>> +	atomic_t *mapped = &folio->_nr_pages_mapped;
>> +	int nr_unmapped = 0;
>> +	int nr_mapped;
>> +	bool last;
>> +	enum node_stat_item idx;
>> +
>> +	if (unlikely(folio_test_hugetlb(folio))) {
>> +		VM_WARN_ON_FOLIO(1, folio);
>> +		return;
>> +	}
>> +
>> +	if (!folio_test_large(folio)) {
>> +		/* Is this the page's last map to be removed? */
>> +		last = atomic_add_negative(-1, &page->_mapcount);
>> +		nr_unmapped = last;
>> +	} else {
>> +		for (; nr != 0; nr--, page++) {
>> +			/* Is this the page's last map to be removed? */
>> +			last = atomic_add_negative(-1, &page->_mapcount);
>> +			if (last) {
>> +				/* Page still mapped if folio mapped entirely */
>> +				nr_mapped = atomic_dec_return_relaxed(mapped);
>
> We're still doing one atomic op per page on the folio's nr_pages_mapped
> ... is it possible to batch this and use atomic_sub_return_relaxed()?
Good spot, something like this:
	} else {
		for (; nr != 0; nr--, page++) {
			/* Is this the page's last map to be removed? */
			last = atomic_add_negative(-1, &page->_mapcount);
			if (last)
				nr_unmapped++;
		}

		/* Pages still mapped if folio mapped entirely */
		nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
		if (nr_mapped >= COMPOUND_MAPPED)
			nr_unmapped = 0;
	}
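
The tail of the function that consumes nr_unmapped/nr_mapped stays as it was;
for completeness, it mirrors page_remove_rmap()'s pattern, roughly like this
(a sketch, not the literal hunk from the patch):

	if (nr_unmapped) {
		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
		__lruvec_stat_mod_folio(folio, idx, -nr_unmapped);

		/*
		 * Queue a large anon folio for deferred split if we have
		 * just unmapped some pages while the folio remains partially
		 * mapped.
		 */
		if (folio_test_large(folio) && folio_test_anon(folio) &&
		    nr_mapped)
			deferred_split_folio(folio);
	}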