Message-ID: <20220622170627.19786-2-linmiaohe@huawei.com>
Date: Thu, 23 Jun 2022 01:06:12 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: <akpm@...ux-foundation.org>
CC: <shy828301@...il.com>, <willy@...radead.org>, <zokeefe@...gle.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<linmiaohe@...wei.com>
Subject: [PATCH 01/16] mm/huge_memory: use flush_pmd_tlb_range in move_huge_pmd
Architectures with special requirements for evicting THP-backing TLB entries
can implement flush_pmd_tlb_range(). Even where they do not, it can help
optimize TLB flushing in the THP regime. Use flush_pmd_tlb_range() in
move_huge_pmd() to take advantage of this.
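
For reference, when an architecture does not provide its own implementation,
the generic header falls back to flush_tlb_range(). A simplified sketch of
that fallback (paraphrased; exact guards and file location may differ across
kernel versions) looks roughly like:

    /* include/linux/pgtable.h (simplified sketch, not verbatim) */
    #ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    /* default: no special THP handling, reuse the normal range flush */
    #define flush_pmd_tlb_range(vma, addr, end) flush_tlb_range(vma, addr, end)
    #else
    /* !THP builds should never reach this */
    #define flush_pmd_tlb_range(vma, addr, end) BUILD_BUG()
    #endif
    #endif

Architectures with special THP TLB-eviction requirements (arc is the example
cited in the kernel sources) can override this with a PMD-aware flush, which
is what the switch in move_huge_pmd() takes advantage of.
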
Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index af0751a79c19..fd6da053a13e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1746,7 +1746,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
pmd = move_soft_dirty_pmd(pmd);
set_pmd_at(mm, new_addr, new_pmd, pmd);
if (force_flush)
- flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+ flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
if (new_ptl != old_ptl)
spin_unlock(new_ptl);
spin_unlock(old_ptl);
--
2.23.0