Message-ID: <20240730064712.3714387-8-alexs@kernel.org>
Date: Tue, 30 Jul 2024 14:47:01 +0800
From: alexs@...nel.org
To: Will Deacon <will@...nel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Russell King <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>,
Brian Cain <bcain@...cinc.com>,
WANG Xuerui <kernel@...0n.name>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Jonas Bonn <jonas@...thpole.se>,
Stefan Kristiansson <stefan.kristiansson@...nalahti.fi>,
Stafford Horne <shorne@...il.com>,
Michael Ellerman <mpe@...erman.id.au>,
Naveen N Rao <naveen@...nel.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Albert Ou <aou@...s.berkeley.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Bibo Mao <maobibo@...ngson.cn>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
linux-arch@...r.kernel.org,
linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-csky@...r.kernel.org,
linux-hexagon@...r.kernel.org,
loongarch@...ts.linux.dev,
linux-m68k@...ts.linux-m68k.org,
linux-openrisc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org,
linux-riscv@...ts.infradead.org,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Vishal Moola <vishal.moola@...il.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Lance Yang <ioworker0@...il.com>,
Peter Xu <peterx@...hat.com>,
Barry Song <baohua@...nel.org>,
linux-s390@...r.kernel.org
Cc: Guo Ren <guoren@...nel.org>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Palmer Dabbelt <palmer@...belt.com>,
Mike Rapoport <rppt@...nel.org>,
Oscar Salvador <osalvador@...e.de>,
Alexandre Ghiti <alexghiti@...osinc.com>,
Jisheng Zhang <jszhang@...nel.org>,
Samuel Holland <samuel.holland@...ive.com>,
Anup Patel <anup@...infault.org>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Breno Leitao <leitao@...ian.org>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Hugh Dickins <hughd@...gle.com>,
David Hildenbrand <david@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Matthew Wilcox <willy@...radead.org>,
Alex Shi <alexs@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: [RFC PATCH 07/18] mm/thp: use ptdesc in copy_huge_pmd
From: Alex Shi <alexs@...nel.org>
Since we have the ptdesc struct now, use it to replace pgtable_t, aka
'struct page *'. This is also a preparation for returning a ptdesc
pointer from the pte_alloc_one series of functions.
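For reference, a minimal user-space sketch of the conversion pattern this
patch relies on: struct ptdesc overlays struct page, so page_ptdesc() and
ptdesc_page() are effectively casts. The struct layouts and macro bodies
below are simplified placeholders for illustration only (the real helpers
live in include/linux/mm_types.h), not the kernel's definitions.

	#include <stdio.h>

	/* Placeholder layouts, not the kernel's. */
	struct page { unsigned long flags; };
	/* ptdesc is a page-table-descriptor view overlaying struct page. */
	struct ptdesc { unsigned long pt_flags; };

	/* In the kernel these are _Generic() cast macros; plain casts here. */
	#define page_ptdesc(p)   ((struct ptdesc *)(p))
	#define ptdesc_page(pt)  ((struct page *)(pt))

	int main(void)
	{
		struct page pg = { .flags = 0 };

		/* pte_alloc_one() still hands back a pgtable_t (struct page *) ... */
		struct ptdesc *ptdesc = page_ptdesc(&pg);

		/* ... and callers that still expect a page convert back. */
		struct page *back = ptdesc_page(ptdesc);

		printf("round trip ok: %d\n", back == &pg);
		return 0;
	}

Since both helpers are casts, the conversions in copy_huge_pmd() below add
no runtime cost; they only make the pointer type explicit at each call site.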
Signed-off-by: Alex Shi <alexs@...nel.org>
Cc: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>
---
mm/huge_memory.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a331d4504d52..236e1582d97e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1369,15 +1369,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
struct page *src_page;
struct folio *src_folio;
pmd_t pmd;
- pgtable_t pgtable = NULL;
+ struct ptdesc *ptdesc = NULL;
int ret = -ENOMEM;
/* Skip if can be re-fill on fault */
if (!vma_is_anonymous(dst_vma))
return 0;
- pgtable = pte_alloc_one(dst_mm);
- if (unlikely(!pgtable))
+ ptdesc = page_ptdesc(pte_alloc_one(dst_mm));
+ if (unlikely(!ptdesc))
goto out;
dst_ptl = pmd_lock(dst_mm, dst_pmd);
@@ -1404,7 +1404,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
}
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
mm_inc_nr_ptes(dst_mm);
- pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
if (!userfaultfd_wp(dst_vma))
pmd = pmd_swp_clear_uffd_wp(pmd);
set_pmd_at(dst_mm, addr, dst_pmd, pmd);
@@ -1414,7 +1414,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
#endif
if (unlikely(!pmd_trans_huge(pmd))) {
- pte_free(dst_mm, pgtable);
+ pte_free(dst_mm, ptdesc_page(ptdesc));
goto out_unlock;
}
/*
@@ -1440,7 +1440,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, src_vma))) {
/* Page maybe pinned: split and retry the fault on PTEs. */
folio_put(src_folio);
- pte_free(dst_mm, pgtable);
+ pte_free(dst_mm, ptdesc_page(ptdesc));
spin_unlock(src_ptl);
spin_unlock(dst_ptl);
__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
@@ -1449,7 +1449,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
out_zero_page:
mm_inc_nr_ptes(dst_mm);
- pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
pmdp_set_wrprotect(src_mm, addr, src_pmd);
if (!userfaultfd_wp(dst_vma))
pmd = pmd_clear_uffd_wp(pmd);
--
2.43.0