Message-ID: <58be47c1-c34e-46a2-a32b-a993f7dee664@redhat.com>
Date: Tue, 5 Dec 2023 14:09:24 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Hugh Dickins <hughd@...gle.com>,
Yin Fengwei <fengwei.yin@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>,
Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH RFC 23/39] mm/rmap: introduce folio_remove_rmap_[pte|ptes|pmd]()

>> +static __always_inline void __folio_remove_rmap(struct folio *folio,
>> +		struct page *page, unsigned int nr_pages,
>> +		struct vm_area_struct *vma, enum rmap_mode mode)
>> +{
>>  	atomic_t *mapped = &folio->_nr_pages_mapped;
>> -	int nr = 0, nr_pmdmapped = 0;
>> -	bool last;
>> +	int last, nr = 0, nr_pmdmapped = 0;
>
> nit: you're being inconsistent across the functions with signed vs unsigned for
> page counts (e.g. nr, nr_pmdmapped) - see __folio_add_rmap(),
> __folio_add_file_rmap(), __folio_add_anon_rmap().
>
Definitely.
> I suggest pick one and stick to it. Personally I'd go with signed int (since
> that's what all the counters in struct folio that we are manipulating are,
> underneath the atomic_t) then check that nr_pages > 0 in
> __folio_rmap_sanity_checks().
Can do, but note that the counters are signed to detect underflows. It
doesn't make sense here to pass a negative number.
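
Roughly, I'd extend the checks like this (just a sketch with a signed
nr_pages, not the final patch; the exact set of checks depends on the
rest of the series):

static __always_inline void __folio_rmap_sanity_checks(struct folio *folio,
		struct page *page, int nr_pages, enum rmap_mode mode)
{
	/* hugetlb folios are handled separately. */
	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);

	/*
	 * Reject non-positive ranges up front; the mapcounts are
	 * signed only so underflows can be detected.
	 */
	VM_WARN_ON_FOLIO(nr_pages <= 0, folio);

	/* The page range must lie entirely within this folio. */
	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
	VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
}
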
>
>>  	enum node_stat_item idx;
>>
>> -	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
>> -	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>> +	__folio_rmap_sanity_checks(folio, page, nr_pages, mode);
>>
>>  	/* Is page being unmapped by PTE? Is this its last map to be removed? */
>> -	if (likely(!compound)) {
>> -		last = atomic_add_negative(-1, &page->_mapcount);
>> -		nr = last;
>> -		if (last && folio_test_large(folio)) {
>> -			nr = atomic_dec_return_relaxed(mapped);
>> -			nr = (nr < COMPOUND_MAPPED);
>> -		}
>> -	} else if (folio_test_pmd_mappable(folio)) {
>> -		/* That test is redundant: it's for safety or to optimize out */
>> +	if (likely(mode == RMAP_MODE_PTE)) {
>> +		do {
>> +			last = atomic_add_negative(-1, &page->_mapcount);
>> +			if (last && folio_test_large(folio)) {
>> +				last = atomic_dec_return_relaxed(mapped);
>> +				last = (last < COMPOUND_MAPPED);
>> +			}
>> +
>> +			if (last)
>> +				nr++;
>> +		} while (page++, --nr_pages > 0);
>> +	} else if (mode == RMAP_MODE_PMD) {
>>  		last = atomic_add_negative(-1, &folio->_entire_mapcount);
>>  		if (last) {
>>  			nr = atomic_sub_return_relaxed(COMPOUND_MAPPED, mapped);
>> @@ -1517,7 +1528,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>>  	 * is still mapped.
>>  	 */
>>  	if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
>
> folio_test_pmd_mappable() -> folio_test_large()
>
> Since you're converting this to support batch PTE removal, it might as well
> support smaller-than-PMD folios too?
I remember that you have a patch for that, right? :)
>
> I currently have a patch to do this same change in the multi-size THP series.
>
Ah, yes, and that should go in first.
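
For reference, with the mode handling from this patch the end result
would essentially be (a sketch of the change you describe; the actual
patch in your multi-size THP series may differ):

	/*
	 * Queue partially-mapped large anon folios for deferred
	 * splitting, not only PMD-mappable ones.
	 */
	if (folio_test_large(folio) && folio_test_anon(folio))
		if (mode == RMAP_MODE_PTE || nr < nr_pmdmapped)
			deferred_split_folio(folio);
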
--
Cheers,
David / dhildenb