Message-ID: <20230825190436.55045-12-mike.kravetz@oracle.com>
Date: Fri, 25 Aug 2023 12:04:31 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Muchun Song <songmuchun@...edance.com>,
Joao Martins <joao.m.martins@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Barry Song <song.bao.hua@...ilicon.com>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Xiongchun Duan <duanxiongchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: [PATCH 11/12] hugetlb: batch TLB flushes when freeing vmemmap

From: Joao Martins <joao.m.martins@...cle.com>

Now that a list of pages is deduplicated at once, the TLB flush can be
batched for all vmemmap pages that were remapped.

Add a new VMEMMAP_REMAP_BULK_PAGES flag so callers can indicate whether
they are remapping a whole list of folios or just a single page, and use
it to decide whether vmemmap_remap_range() should flush right away.

The TLB flush is global, as we have no guarantee from the caller that
the set of folios is contiguous, and composing a list of kernel virtual
addresses to flush would add complexity.

Modified by Mike Kravetz to perform a TLB flush on a single folio if an
error is encountered.
Signed-off-by: Joao Martins <joao.m.martins@...cle.com>
Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
---
mm/hugetlb_vmemmap.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
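
As an aside for reviewers: the batching shape this patch introduces can
be sketched standalone. The stubs below only mirror the diff so the
sketch compiles in userspace; remap_range(), main(), and the addresses
are illustrative, not the kernel code.

#include <stdio.h>

#define BIT(n)				(1UL << (n))
#define VMEMMAP_REMAP_ONLY_SPLIT	BIT(0)
#define VMEMMAP_REMAP_BULK_PAGES	BIT(1)
#define TLB_FLUSH_ALL			(-1UL)

struct vmemmap_remap_walk {
	unsigned long flags;
};

/* Stub standing in for the kernel's TLB flush primitive. */
static void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	printf("flush: [%#lx, %#lx)\n", start, end);
}

/*
 * Stands in for vmemmap_remap_range(): do the remap (page table walk
 * elided here), then flush unless the caller asked to defer the flush
 * (split-only or bulk operation).
 */
static void remap_range(unsigned long start, unsigned long end,
			struct vmemmap_remap_walk *walk)
{
	if (!(walk->flags &
			(VMEMMAP_REMAP_ONLY_SPLIT | VMEMMAP_REMAP_BULK_PAGES)))
		flush_tlb_kernel_range(start, end);
}

int main(void)
{
	struct vmemmap_remap_walk walk = {
		.flags = VMEMMAP_REMAP_BULK_PAGES,
	};
	unsigned long addr;

	/* Bulk path: N remaps with the per-range flush suppressed ... */
	for (addr = 0x1000; addr < 0x4000; addr += 0x1000)
		remap_range(addr, addr + 0x1000, &walk);

	/*
	 * ... followed by one global flush, as
	 * hugetlb_vmemmap_optimize_folios() does after draining the
	 * folio list.
	 */
	flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
	return 0;
}
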
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 904a64fe5669..a2fc7b03ac6b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -36,6 +36,7 @@ struct vmemmap_remap_walk {
 	unsigned long		reuse_addr;
 	struct list_head	*vmemmap_pages;
 #define VMEMMAP_REMAP_ONLY_SPLIT	BIT(0)
+#define VMEMMAP_REMAP_BULK_PAGES	BIT(1)
 	unsigned long		flags;
 };
 
@@ -211,7 +212,8 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 			return ret;
 	} while (pgd++, addr = next, addr != end);
 
-	if (!(walk->flags & VMEMMAP_REMAP_ONLY_SPLIT))
+	if (!(walk->flags &
+			(VMEMMAP_REMAP_ONLY_SPLIT | VMEMMAP_REMAP_BULK_PAGES)))
 		flush_tlb_kernel_range(start, end);
 
 	return 0;
@@ -377,7 +379,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 		.remap_pte	= vmemmap_remap_pte,
 		.reuse_addr	= reuse,
 		.vmemmap_pages	= &vmemmap_pages,
-		.flags		= 0,
+		.flags		= !bulk_pages ? 0 : VMEMMAP_REMAP_BULK_PAGES,
 	};
 	int nid = page_to_nid((struct page *)start);
 	gfp_t gfp_mask = GFP_KERNEL | __GFP_THISNODE | __GFP_NORETRY |
@@ -427,6 +429,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 			.remap_pte	= vmemmap_restore_pte,
 			.reuse_addr	= reuse,
 			.vmemmap_pages	= &vmemmap_pages,
+			.flags		= 0,
 		};
 
 		vmemmap_remap_range(reuse, end, &walk);
@@ -700,6 +703,8 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 	list_for_each_entry(folio, folio_list, lru)
 		hugetlb_vmemmap_optimize_bulk(h, &folio->page, &vmemmap_pages);
 
+	flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
+
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
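
One more reviewer note, on the error path: the ".flags = 0" restore walk
in the fourth hunk is what yields the single-folio flush on error
mentioned in the message. Below is a standalone approximation, reusing
remap_range() and the defines from the sketch above the diff;
remap_free_one() and its failure injection are hypothetical names, not
the kernel code.

/*
 * Error path, approximated: a bulk caller defers the flush, but if the
 * remap of one folio fails, the restore walk runs with flags = 0 and
 * remap_range() flushes just that folio's range immediately instead of
 * waiting for the final global flush.
 */
static int remap_free_one(unsigned long reuse, unsigned long end,
			  int simulate_failure)
{
	struct vmemmap_remap_walk walk = {
		.flags = VMEMMAP_REMAP_BULK_PAGES,	/* defer the flush */
	};

	remap_range(reuse, end, &walk);

	if (simulate_failure) {
		/* Restore with flags = 0: per-range flush happens here. */
		struct vmemmap_remap_walk restore = { .flags = 0 };

		remap_range(reuse, end, &restore);
		return -1;
	}
	return 0;
}

Calling remap_free_one(0x1000, 0x2000, 1) from main() in the earlier
sketch would print the immediate per-range flush for the failed folio.
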
--
2.41.0