Date:   Tue, 28 Nov 2023 21:32:34 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Muchun Song <muchun.song@...ux.dev>
Subject: Re: [PATCH v1 2/5] mm/rmap: introduce and use hugetlb_remove_rmap()

>> What doesn't make sense is using patterns like:
>>
>> 	page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
>>
>> and then, inside page_remove_rmap() have an initial folio_test_hugetlb()
>> check that does something completely different.
> 
> IIUC the above "folio_test_hugetlb(folio)" pattern can become "false" for
> hugetlb if we decide to do mapcounts for small hugetlb mappings.  If that
> happens, I think something like this patch _may_ need to be reverted again,
> more or less.  Or we start to copy some of page_remove_rmap() into the new
> hugetlb rmap API.

Probably we'd just extend it to a hugetlb rmap API if you want to do some 
batching. But who knows how that will actually look. The 
migrate.c/rmap.c special-casing for hugetlb when migrating/unmapping is 
already quite severe, and this is just the tip of the iceberg.

Further, the question is whether you really want to deal with subpage 
mapcounts like the !hugetlb variant does. Quite likely you don't want to 
do that if you can avoid it, and instead just keep a total mapcount.

> 
>>
>> So each and every caller of page_remove_rmap() (even when knowing that it's
>> certainly not a hugetlb folio) has to run through that check.
> 
> Note that right after this change is applied, hugetlb already starts to lose
> something in common with generic compound folios, where page_remove_rmap()
> had:
> 
> 	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
> 
> That sanity check goes away immediately for hugetlb, even though it is still
> logically applicable.

Why? We're passing folios that the caller already properly prepared.

A folio by definition points to the head page.

> 
>>
>> Then, we have functions like page_add_file_rmap() that look like they can be
>> used for hugetlb, but hugetlb is smart enough and only calls
>> page_dup_file_rmap(), because it doesn't want to touch any file/anon
>> counters. And to handle that we would have to add a folio_test_hugetlb()
>> check inside there, which has the same problem as above: trying to unify
>> without unifying.
>>
>> Once we're in the area of folio_add_file_rmap_range(), it all stops making
>> sense, because there is no way we could possibly partially map a folio
>> today. (and if we can in the future, we might still want separate handling,
>> because most callers know which pages they are dealing with; see below)
>>
>> Last but not least, it's all inconsistent right now with
>> hugetlb_add_anon_rmap()/hugetlb_add_new_anon_rmap() being there because they
>> differ reasonably well from the "ordinary" counterparts.
> 
> Taking hugepage_add_new_anon_rmap() as an example, IMHO they still share a
> lot with the !hugetlb code, and maybe they can already be cleaned up into
> something common for a large mapping:
> 
> void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> 		unsigned long address)
> {
> 	int nr;
> 
> 	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
> 
> 	if (folio_test_hugetlb(folio)) {
> 		folio_clear_hugetlb_restore_reserve(folio);
> 	} else {
> 		__folio_set_swapbacked(folio);
> 		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
> 		nr = folio_nr_pages(folio);
> 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
> 		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
> 	}
> 
> 	/* increment count (starts at -1) */
> 	atomic_set(&folio->_entire_mapcount, 0);
> 	__folio_set_anon(folio, vma, address, true);
> 	SetPageAnonExclusive(&folio->page);
> }

Note that this function is about to be extended to be even more 
!hugetlb-like in Ryan's work (smaller THPs).

> 
> For folio_add_file_rmap_range(): would it work if we just passed the hugetlb
> folio range into it?  Would that make things much more difficult for the
> recent work on large folios from you or anyone else?

Sure we could, but that doesn't mean it's a good interface for hugetlb, 
which doesn't support ranges right now and maybe never will. :)

> 
>> I don't think that going in the other direction, e.g., removing
>> hugetlb_add_anon_rmap / hugetlb_add_new_anon_rmap, would be a unification
>> that is really reasonable. It would only make things slower and the
>> individual functions more complicated.
> 
> IIUC, so far the performance difference between the helpers should be
> minimal.

I've learned that even the likely() in "if (likely(compound))" matters 
in some rmap functions.

So I'm still against effectively adding more folio_test_hugetlb() checks 
on hot code paths for the sake of a "unification" that I don't really 
see as unification.

> 
> As I mentioned, I sincerely don't know whether this patch is good or not, as
> I don't know enough about everything that is happening around it.  It's just
> that I'd still think twice about giving hugetlb its own rmap API, because
> from a high level it seems to go the other way round to me.  So I still want
> to raise this as a pure question.

I consider the end result here easier to read and easier to extend. If 
there are good reasons to go a different way, I'm all ears.

-- 
Cheers,

David / dhildenb
