Message-ID: <70cc29f2-6018-4794-ace9-d96077cfed6a@intel.com>
Date: Wed, 6 Dec 2023 09:22:37 +0800
From: Yin Fengwei <fengwei.yin@...el.com>
To: David Hildenbrand <david@...hat.com>,
<linux-kernel@...r.kernel.org>
CC: <linux-mm@...ck.org>, Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Hugh Dickins <hughd@...gle.com>,
"Ryan Roberts" <ryan.roberts@....com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>,
Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH RFC 03/39] mm/rmap: introduce and use
hugetlb_add_file_rmap()
On 12/4/23 22:21, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code.
> For example, hugetlb currently only supports entire mappings, and treats
> any mapping as mapped using a single "logical PTE". Let's move it out
> of the way so we can overhaul our "ordinary" rmap
> implementation/interface.
>
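
A quick note for anyone skimming: the "single logical PTE" wording, as I
read it, means that mapping a hugetlb folio always bumps exactly one
counter for the whole folio, no matter its size, while "ordinary" rmap
tracks pages (or PTE-mapped subpages of a large folio) individually.
A minimal sketch of the contrast (simplified, not the exact mainline
code):

	/* hugetlb: one counter covers the whole folio */
	atomic_inc(&folio->_entire_mapcount);

	/* "ordinary" rmap, small folio: each page tracked on its own */
	atomic_inc(&page->_mapcount);
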
> Right now we're using page_dup_file_rmap() in some cases where "ordinary"
> rmap code would have used page_add_file_rmap(). So let's introduce and
> use hugetlb_add_file_rmap() instead. We won't be adding a
> "hugetlb_dup_file_rmap()" functon for the fork() case, as it would be
> doing the same: "dup" is just an optimization for "add".
>
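
To make the '"dup" is just an optimization for "add"' point concrete:
for the compound file case, page_dup_file_rmap() already boils down to
the same atomic_inc() the new helper does; it only skips the extra
accounting that page_add_file_rmap() performs for non-hugetlb folios.
Roughly, from my reading of the current code (paraphrased, not
verbatim):

	/* what page_dup_file_rmap(page, true) effectively does today */
	if (compound)
		atomic_inc(&folio->_entire_mapcount);
	else
		atomic_inc(&page->_mapcount);

So switching these callers to hugetlb_add_file_rmap() is purely about
moving hugetlb onto its own interface, with no behavioral change.
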
> What remains is a single page_dup_file_rmap() call in fork() code.
>
> Signed-off-by: David Hildenbrand <david@...hat.com>
Reviewed-by: Yin Fengwei <fengwei.yin@...el.com>
> ---
> include/linux/rmap.h | 7 +++++++
> mm/hugetlb.c | 6 +++---
> mm/migrate.c | 2 +-
> 3 files changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index e8d1dc1d5361f..0a81e8420a961 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -208,6 +208,13 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
> void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> unsigned long address);
>
> +static inline void hugetlb_add_file_rmap(struct folio *folio)
> +{
> + VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
> +
> + atomic_inc(&folio->_entire_mapcount);
> +}
> +
> static inline void hugetlb_remove_rmap(struct folio *folio)
> {
> atomic_dec(&folio->_entire_mapcount);
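
The new helper also pairs up nicely with hugetlb_remove_rmap() just
below it, keeping add and remove symmetric on _entire_mapcount. An
illustrative (hypothetical) call sequence for a shared file mapping:

	hugetlb_add_file_rmap(folio);	/* mapping established */
	/* ... folio is mapped ... */
	hugetlb_remove_rmap(folio);	/* mapping torn down */

And the VM_WARN_ON_FOLIO() gives an early warning if an anon folio
ever reaches the file path.
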
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d17bb53b19ff2..541a8f38cfdc7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5401,7 +5401,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> * sleep during the process.
> */
> if (!folio_test_anon(pte_folio)) {
> - page_dup_file_rmap(&pte_folio->page, true);
> + hugetlb_add_file_rmap(pte_folio);
> } else if (page_try_dup_anon_rmap(&pte_folio->page,
> true, src_vma)) {
> pte_t src_pte_old = entry;
> @@ -6272,7 +6272,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> if (anon_rmap)
> hugetlb_add_new_anon_rmap(folio, vma, haddr);
> else
> - page_dup_file_rmap(&folio->page, true);
> + hugetlb_add_file_rmap(folio);
> new_pte = make_huge_pte(vma, &folio->page, ((vma->vm_flags & VM_WRITE)
> && (vma->vm_flags & VM_SHARED)));
> /*
> @@ -6723,7 +6723,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
> goto out_release_unlock;
>
> if (folio_in_pagecache)
> - page_dup_file_rmap(&folio->page, true);
> + hugetlb_add_file_rmap(folio);
> else
> hugetlb_add_new_anon_rmap(folio, dst_vma, dst_addr);
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 4cb849fa0dd2c..de9d94b99ab78 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -252,7 +252,7 @@ static bool remove_migration_pte(struct folio *folio,
> hugetlb_add_anon_rmap(folio, vma, pvmw.address,
> rmap_flags);
> else
> - page_dup_file_rmap(new, true);
> + hugetlb_add_file_rmap(folio);
> set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte,
> psize);
> } else
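
One more observation on this last hunk: the switch from "new" to
"folio" should be equivalent here, since for the hugetlb case "new" is
a page of "folio", so the old call incremented that same folio's
_entire_mapcount. If one wanted to be paranoid, an assertion along
these lines would capture that (illustrative only, not suggested for
the patch):

	VM_WARN_ON_FOLIO(page_folio(new) != folio, folio);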