Message-ID: <20230602015408.376149-2-zhangpeng362@huawei.com>
Date: Fri, 2 Jun 2023 09:54:07 +0800
From: Peng Zhang <zhangpeng362@...wei.com>
To: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<akpm@...ux-foundation.org>, <willy@...radead.org>,
<mike.kravetz@...cle.com>
CC: <muchun.song@...ux.dev>, <sidhartha.kumar@...cle.com>,
<vishal.moola@...il.com>, <wangkefeng.wang@...wei.com>,
<sunnanyong@...wei.com>, ZhangPeng <zhangpeng362@...wei.com>
Subject: [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range()
From: ZhangPeng <zhangpeng362@...wei.com>
We can replace five implicit calls to compound_head() with one by using
pte_folio. However, we still need to keep ptepage because we need to know
which page in the folio we are copying.
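
For illustration, the before/after pattern looks roughly like this
(a simplified sketch based on the diff below, not literal kernel code;
error paths omitted):

    /* Before: each page-based helper re-derives the folio internally. */
    ptepage = pte_page(entry);
    get_page(ptepage);                  /* implicit compound_head() */
    if (!PageAnon(ptepage))             /* implicit compound_head() */
            ...
    put_page(ptepage);                  /* implicit compound_head() */

    /* After: resolve the folio once, then use folio-based helpers. */
    ptepage = pte_page(entry);
    pte_folio = page_folio(ptepage);    /* the single explicit lookup */
    folio_get(pte_folio);
    if (!folio_test_anon(pte_folio))
            ...
    folio_put(pte_folio);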
Suggested-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Signed-off-by: ZhangPeng <zhangpeng362@...wei.com>
---
mm/hugetlb.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea24718db4af..0b774dd3d57b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5017,6 +5017,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
{
pte_t *src_pte, *dst_pte, entry;
struct page *ptepage;
+ struct folio *pte_folio;
unsigned long addr;
bool cow = is_cow_mapping(src_vma->vm_flags);
struct hstate *h = hstate_vma(src_vma);
@@ -5116,7 +5117,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
} else {
entry = huge_ptep_get(src_pte);
ptepage = pte_page(entry);
- get_page(ptepage);
+ pte_folio = page_folio(ptepage);
+ folio_get(pte_folio);

/*
* Failing to duplicate the anon rmap is a rare case
@@ -5128,7 +5130,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
* need to be without the pgtable locks since we could
* sleep during the process.
*/
- if (!PageAnon(ptepage)) {
+ if (!folio_test_anon(pte_folio)) {
page_dup_file_rmap(ptepage, true);
} else if (page_try_dup_anon_rmap(ptepage, true,
src_vma)) {
@@ -5140,14 +5142,14 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
/* Do not use reserve as it's private owned */
new_folio = alloc_hugetlb_folio(dst_vma, addr, 1);
if (IS_ERR(new_folio)) {
- put_page(ptepage);
+ folio_put(pte_folio);
ret = PTR_ERR(new_folio);
break;
}
ret = copy_user_large_folio(new_folio,
- page_folio(ptepage),
- addr, dst_vma);
- put_page(ptepage);
+ pte_folio,
+ addr, dst_vma);
+ folio_put(pte_folio);
if (ret) {
folio_put(new_folio);
break;
--
2.25.1