Message-ID: <fea2113f-1ac6-4968-88a6-674ef7800ef2@arm.com>
Date: Mon, 11 Dec 2023 16:29:06 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Hugh Dickins <hughd@...gle.com>,
Yin Fengwei <fengwei.yin@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>,
Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH v1 05/39] mm/rmap: introduce and use
hugetlb_try_share_anon_rmap()
On 11/12/2023 15:56, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code.
> For example, hugetlb currently only supports entire mappings, and treats
> any mapping as mapped using a single "logical PTE". Let's move it out
> of the way so we can overhaul our "ordinary" rmap
> implementation/interface.
>
> So let's introduce and use hugetlb_try_share_anon_rmap() to make all
> hugetlb handling use dedicated hugetlb_* rmap functions.
>
> Note that try_to_unmap_one() does not need any changes. That's easy to
> spot because, among all the nasty hugetlb special-casing in that function,
> we're not using set_huge_pte_at() on the anon path -- well, and that code
> assumes we would want to swap out.
>
> Reviewed-by: Yin Fengwei <fengwei.yin@...el.com>
> Signed-off-by: David Hildenbrand <david@...hat.com>
Reviewed-by: Ryan Roberts <ryan.roberts@....com>
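
One observation for anyone reading along: the calling convention is the same
as for page_try_share_anon_rmap() -- the caller clears the (huge) PTE first,
then tries to unshare, and puts the PTE back if the folio may be pinned.
Roughly, as a hypothetical caller sketch (hugetlb_try_share_anon_rmap(),
huge_ptep_clear_flush() and set_huge_pte_at() are the real interfaces; the
wrapper itself is made up purely for illustration):

/*
 * Hypothetical sketch only, not kernel code: clear the huge PTE, try to
 * drop PageAnonExclusive, and restore the mapping if the folio may be
 * GUP-pinned.
 */
static bool sketch_unshare_hugetlb_anon(struct vm_area_struct *vma,
					unsigned long addr, pte_t *ptep,
					struct folio *folio, unsigned long hsz)
{
	/* Clear the PTE first, so concurrent GUP-fast can observe it. */
	pte_t pteval = huge_ptep_clear_flush(vma, addr, ptep);

	if (hugetlb_try_share_anon_rmap(folio)) {
		/* Folio may be DMA-pinned: restore the PTE and give up. */
		set_huge_pte_at(vma->vm_mm, addr, ptep, pteval, hsz);
		return false;
	}
	/* PageAnonExclusive is clear; safe to go ahead (e.g. migrate). */
	return true;
}
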
> ---
>  include/linux/rmap.h | 23 +++++++++++++++++++++++
>  mm/rmap.c            | 15 ++++++++++-----
>  2 files changed, 33 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index ca42b3db5688..4c0650e9f6db 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -228,6 +228,29 @@ static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
>  	return 0;
>  }
>  
> +/* See page_try_share_anon_rmap() */
> +static inline int hugetlb_try_share_anon_rmap(struct folio *folio)
> +{
> +	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
> +	VM_WARN_ON_FOLIO(!PageAnonExclusive(&folio->page), folio);
> +
> +	/* Paired with the memory barrier in try_grab_folio(). */
> +	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
> +		smp_mb();
> +
> +	if (unlikely(folio_maybe_dma_pinned(folio)))
> +		return -EBUSY;
> +	ClearPageAnonExclusive(&folio->page);
> +
> +	/*
> +	 * This is conceptually a smp_wmb() paired with the smp_rmb() in
> +	 * gup_must_unshare().
> +	 */
> +	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
> +		smp_mb__after_atomic();
> +	return 0;
> +}
> +
>  static inline void hugetlb_add_file_rmap(struct folio *folio)
>  {
>  	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
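
The barrier pairing above looks right to me; as I read it, it's the usual
store-buffering pattern: the unshare side stores "PTE cleared" (done by the
caller) and then loads the pin count, while GUP-fast raises the refcount and
then re-checks the PTE / PageAnonExclusive, with a full barrier between store
and load on both sides, so at least one side must observe the other. For
anyone who wants to convince themselves, a tiny userspace model of just that
property (plain C11 atomics and pthreads, deliberately not the real GUP
logic, all names made up):

/*
 * Userspace model of the store-buffering pattern: "pte_present" stands in
 * for the mapped huge PTE, "pinned" for the GUP pin count.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pte_present = 1;
static atomic_int pinned;

static void *gup_fast_side(void *arg)	/* models try_grab_folio() + recheck */
{
	atomic_fetch_add(&pinned, 1);			/* take the pin */
	atomic_thread_fence(memory_order_seq_cst);	/* "smp_mb()" */
	if (!atomic_load(&pte_present))			/* PTE already gone? */
		atomic_fetch_sub(&pinned, 1);		/* back off */
	return NULL;
}

static void *unshare_side(void *arg)	/* models the new helper + caller */
{
	atomic_store(&pte_present, 0);			/* caller cleared the PTE */
	atomic_thread_fence(memory_order_seq_cst);	/* "smp_mb()" */
	if (atomic_load(&pinned))			/* folio_maybe_dma_pinned()? */
		printf("-EBUSY: restore the PTE, keep PageAnonExclusive\n");
	else
		printf("safe to ClearPageAnonExclusive()\n");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, gup_fast_side, NULL);
	pthread_create(&b, NULL, unshare_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

With both fences in place, "GUP keeps the pin while the other side clears
PageAnonExclusive" cannot happen; drop either fence and it can.
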
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 4e60c1f38eaa..e210ac1b73de 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2147,13 +2147,18 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  				       !anon_exclusive, subpage);
>  
>  			/* See page_try_share_anon_rmap(): clear PTE first. */
> -			if (anon_exclusive &&
> -			    page_try_share_anon_rmap(subpage)) {
> -				if (folio_test_hugetlb(folio))
> +			if (folio_test_hugetlb(folio)) {
> +				if (anon_exclusive &&
> +				    hugetlb_try_share_anon_rmap(folio)) {
>  					set_huge_pte_at(mm, address, pvmw.pte,
>  							pteval, hsz);
> -				else
> -					set_pte_at(mm, address, pvmw.pte, pteval);
> +					ret = false;
> +					page_vma_mapped_walk_done(&pvmw);
> +					break;
> +				}
> +			} else if (anon_exclusive &&
> +				   page_try_share_anon_rmap(subpage)) {
> +				set_pte_at(mm, address, pvmw.pte, pteval);
>  				ret = false;
>  				page_vma_mapped_walk_done(&pvmw);
>  				break;