Message-ID: <a856a6bb-d27a-216e-dd45-e1bc0d040702@linux.dev>
Date: Wed, 30 Aug 2023 16:47:18 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Muchun Song <songmuchun@...edance.com>,
Joao Martins <joao.m.martins@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Barry Song <song.bao.hua@...ilicon.com>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Xiongchun Duan <duanxiongchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 12/12] hugetlb: batch TLB flushes when restoring vmemmap
On 2023/8/26 03:04, Mike Kravetz wrote:
> Update the hugetlb_vmemmap_restore path to take a 'batch' parameter that
> indicates restoration is happening on a batch of pages. When set, use
> the existing mechanism (VMEMMAP_REMAP_BULK_PAGES) to delay TLB flushing.
> The routine hugetlb_vmemmap_restore_folios is the only user of this new
> batch parameter and it will perform a global flush after all vmemmap is
> restored.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++--------------
> 1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index a2fc7b03ac6b..d6e7440b9507 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -479,17 +479,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> * @end: end address of the vmemmap virtual address range that we want to
> * remap.
> * @reuse: reuse address.
> + * @bulk: bulk operation, batch TLB flushes
> *
> * Return: %0 on success, negative error code otherwise.
> */
> static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> - unsigned long reuse)
> + unsigned long reuse, bool bulk)
I'd like to let vmemmap_remap_alloc take VMEMMAP_REMAP_BULK_PAGES
directly; in that case, we would not need to change this function again
if another flag is introduced in the future. I mean changing "bool bulk"
to "unsigned long flags".
> {
> LIST_HEAD(vmemmap_pages);
> struct vmemmap_remap_walk walk = {
> .remap_pte = vmemmap_restore_pte,
> .reuse_addr = reuse,
> .vmemmap_pages = &vmemmap_pages,
> + .flags = !bulk ? 0 : VMEMMAP_REMAP_BULK_PAGES,
> };
>
> /* See the comment in the vmemmap_remap_free(). */
> @@ -511,17 +513,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
> static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
> core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -/**
> - * hugetlb_vmemmap_restore - restore previously optimized (by
> - * hugetlb_vmemmap_optimize()) vmemmap pages which
> - * will be reallocated and remapped.
> - * @h: struct hstate.
> - * @head: the head page whose vmemmap pages will be restored.
> - *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> - * negative error code otherwise.
> - */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)
The same as here.
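I.e., this helper would just forward the flags (sketch, body otherwise
unchanged from your patch):

int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
			      unsigned long flags)
{
	/* ... */
	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse,
				  flags);
	/* ... */
}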
> {
> int ret;
> unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> @@ -541,7 +533,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> * When a HugeTLB page is freed to the buddy allocator, previously
> * discarded vmemmap pages must be allocated and remapped.
> */
> - ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
> + ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk);
> if (!ret) {
> ClearHPageVmemmapOptimized(head);
> static_branch_dec(&hugetlb_optimize_vmemmap_key);
> @@ -550,12 +542,29 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> return ret;
> }
>
> +/**
> + * hugetlb_vmemmap_restore - restore previously optimized (by
> + * hugetlb_vmemmap_optimize()) vmemmap pages which
> + * will be reallocated and remapped.
> + * @h: struct hstate.
> + * @head: the head page whose vmemmap pages will be restored.
> + *
> + * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * negative error code otherwise.
> + */
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +{
> + return __hugetlb_vmemmap_restore(h, head, false);
> +}
> +
> void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
> {
> struct folio *folio;
>
> list_for_each_entry(folio, folio_list, lru)
> - hugetlb_vmemmap_restore(h, &folio->page);
> + (void)__hugetlb_vmemmap_restore(h, &folio->page, true);
Pass VMEMMAP_REMAP_BULK_PAGES directly here.
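Something like (sketch):

	list_for_each_entry(folio, folio_list, lru)
		(void)__hugetlb_vmemmap_restore(h, &folio->page,
						VMEMMAP_REMAP_BULK_PAGES);

while hugetlb_vmemmap_restore() passes 0 for the non-batched case.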
Thanks.
> +
> + flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
> }
>
> /* Return true iff a HugeTLB page's vmemmap should and can be optimized. */