Message-ID: <7f3096c5-c3cc-4ead-7c5e-8bade6c930da@nvidia.com>
Date: Thu, 8 Dec 2022 16:24:19 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Peter Xu <peterx@...hat.com>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Jann Horn <jannh@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
James Houghton <jthoughton@...gle.com>,
Rik van Riel <riel@...riel.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Nadav Amit <nadav.amit@...il.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
David Hildenbrand <david@...hat.com>,
"Andrew Morton" <akpm@...ux-foundation.org>,
Muchun Song <songmuchun@...edance.com>
Subject: Re: [PATCH v2 10/10] mm/hugetlb: Document why page_vma_mapped_walk()
is safe to walk
On 12/8/22 14:21, Peter Xu wrote:
>
> Firstly, this patch (to be squashed into the previous one) documents
> why page_vma_mapped_walk() does not need to take any further lock
> before calling hugetlb_walk().
>
> To call hugetlb_walk() we need to hold either of the locks listed
> below (in either read or write mode), according to the rules we set up
> for it in patch 3:
>
> (1) hugetlb vma lock
> (2) i_mmap_rwsem lock
>
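> As an illustration only (this is a sketch, not the exact patch 3 code;
> hugetlb_vma_rwsem() is a hypothetical accessor for the vma lock's
> rw_semaphore), the rule can be enforced with lockdep assertions inside
> hugetlb_walk():
>
>         static inline pte_t *hugetlb_walk(struct vm_area_struct *vma,
>                                           unsigned long addr, unsigned long sz)
>         {
>                 /* Warn unless the caller holds either the hugetlb vma
>                  * lock or i_mmap_rwsem; read or write mode both
>                  * satisfy the rule. */
>                 WARN_ON_ONCE(!lockdep_is_held(hugetlb_vma_rwsem(vma)) &&
>                              !lockdep_is_held(&vma->vm_file->f_mapping->i_mmap_rwsem));
>
>                 return huge_pte_offset(vma->vm_mm, addr, sz);
>         }
>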
> page_vma_mapped_walk() is called from the following sites across the kernel:
>
> __replace_page[179] if (!page_vma_mapped_walk(&pvmw))
> __damon_pa_mkold[24] while (page_vma_mapped_walk(&pvmw)) {
> __damon_pa_young[97] while (page_vma_mapped_walk(&pvmw)) {
> write_protect_page[1065] if (!page_vma_mapped_walk(&pvmw))
> remove_migration_pte[179] while (page_vma_mapped_walk(&pvmw)) {
> page_idle_clear_pte_refs_one[56] while (page_vma_mapped_walk(&pvmw)) {
> page_mapped_in_vma[318] if (!page_vma_mapped_walk(&pvmw))
> folio_referenced_one[813] while (page_vma_mapped_walk(&pvmw)) {
> page_vma_mkclean_one[958] while (page_vma_mapped_walk(pvmw)) {
> try_to_unmap_one[1506] while (page_vma_mapped_walk(&pvmw)) {
> try_to_migrate_one[1881] while (page_vma_mapped_walk(&pvmw)) {
> page_make_device_exclusive_one[2205] while (page_vma_mapped_walk(&pvmw)) {
>
> If we group them, we can see that most of them happen during an rmap
> walk (i.e., they come from a higher rmap_walk() call stack):
>
> __damon_pa_mkold[24] while (page_vma_mapped_walk(&pvmw)) {
> __damon_pa_young[97] while (page_vma_mapped_walk(&pvmw)) {
> remove_migration_pte[179] while (page_vma_mapped_walk(&pvmw)) {
> page_idle_clear_pte_refs_one[56] while (page_vma_mapped_walk(&pvmw)) {
> page_mapped_in_vma[318] if (!page_vma_mapped_walk(&pvmw))
> folio_referenced_one[813] while (page_vma_mapped_walk(&pvmw)) {
> page_vma_mkclean_one[958] while (page_vma_mapped_walk(pvmw)) {
> try_to_unmap_one[1506] while (page_vma_mapped_walk(&pvmw)) {
> try_to_migrate_one[1881] while (page_vma_mapped_walk(&pvmw)) {
> page_make_device_exclusive_one[2205] while (page_vma_mapped_walk(&pvmw)) {
>
> Let's call it case (A).
>
> We have two more special cases that are not part of an rmap walk:
>
> write_protect_page[1065] if (!page_vma_mapped_walk(&pvmw))
> __replace_page[179] if (!page_vma_mapped_walk(&pvmw))
>
> Let's call it case (B).
>
> Case (A) is always safe because it always takes the i_mmap_rwsem lock
> in read mode.  That happens in rmap_walk_file(), where:
>
>         if (!locked) {
>                 if (i_mmap_trylock_read(mapping))
>                         goto lookup;
>
>                 if (rwc->try_lock) {
>                         rwc->contended = true;
>                         return;
>                 }
>
>                 i_mmap_lock_read(mapping);
>         }
>
> If locked==true, the caller already holds the lock, so there is no
> need to take it.  This shows that every caller reached from rmap_walk()
> on a hugetlb vma is already safe to call hugetlb_walk() according to
> its locking rule.
>
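> To spell out case (A), the call chain looks like this:
>
>         rmap_walk_file()                /* takes i_mmap_rwsem (read) */
>           -> rwc->rmap_one()            /* e.g. folio_referenced_one() */
>                -> page_vma_mapped_walk()
>                     -> hugetlb_walk()   /* lock already held, so safe */
>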
> Case (B) covers two call sites, one in the KSM path and one in the
> uprobe path, and neither path (afaict) can have a hugetlb vma
> involved.  IOW, the branch
>
>         if (unlikely(is_vm_hugetlb_page(vma))) {
>
> in page_vma_mapped_walk() should simply never trigger for them.
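>
> For reference, that branch sits roughly like this (simplified from the
> series, not verbatim):
>
>         if (unlikely(is_vm_hugetlb_page(vma))) {
>                 unsigned long size = huge_page_size(hstate_vma(vma));
>
>                 /* Case (A) already holds i_mmap_rwsem; case (B)
>                  * never reaches this branch at all. */
>                 pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
>                 if (!pvmw->pte)
>                         return false;
>                 /* ... rest of the hugetlb handling ... */
>         }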
>
> The comment is the above analysis summarized into a shorter paragraph.
>
> Hope this explains it. Thanks.
>
It does! And as for the comment, I think you'll find that this suffices:
	/*
	 * All callers that get here will already hold the i_mmap_rwsem.
	 * Therefore, no additional locks need to be taken before
	 * calling hugetlb_walk().
	 */
...which, considering all the data above, is probably the mother of
all summaries. :) But really, it's all that people need to know here, and
it's readily understandable without wondering what KSM has to do with this,
for example.
thanks,
--
John Hubbard
NVIDIA