Message-ID: <8e298c9f-1ef3-5c99-d7b5-47fd6703cf83@linux.dev>
Date: Wed, 30 Aug 2023 15:26:11 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Muchun Song <songmuchun@...edance.com>,
Joao Martins <joao.m.martins@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Barry Song <song.bao.hua@...ilicon.com>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Xiongchun Duan <duanxiongchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 09/12] hugetlb_vmemmap: Optimistically set Optimized flag
On 2023/8/26 03:04, Mike Kravetz wrote:
> At the beginning of hugetlb_vmemmap_optimize, optimistically set
> the HPageVmemmapOptimized flag in the head page. Clear the flag
> if the operation fails.
>
> No change in behavior. However, this will become important in
> subsequent patches where we batch-delay TLB flushing. We need to
> make sure the contents of the old and new vmemmap pages are the same.
Sorry, I didn't get the point here. Could you elaborate?
>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> mm/hugetlb_vmemmap.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index e390170c0887..500a118915ff 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -566,7 +566,9 @@ static void __hugetlb_vmemmap_optimize(const struct hstate *h,
> if (!vmemmap_should_optimize(h, head))
> return;
>
> + /* Optimistically assume success */
> static_branch_inc(&hugetlb_optimize_vmemmap_key);
> + SetHPageVmemmapOptimized(head);
>
> vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
> vmemmap_reuse = vmemmap_start;
> @@ -577,10 +579,10 @@ static void __hugetlb_vmemmap_optimize(const struct hstate *h,
> * to the page which @vmemmap_reuse is mapped to, then free the pages
> * which the range [@vmemmap_start, @vmemmap_end] is mapped to.
> */
> - if (vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk_pages))
> + if (vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk_pages)) {
> static_branch_dec(&hugetlb_optimize_vmemmap_key);
> - else
> - SetHPageVmemmapOptimized(head);
> + ClearHPageVmemmapOptimized(head);
> + }
> }
>
> /**
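
For readers skimming the thread, below is a minimal userspace sketch of
the optimistic-set/rollback pattern the hunk above implements. Every name
in it (struct page as defined here, FLAG_OPTIMIZED, try_remap, optimize,
optimized_count) is a hypothetical stand-in for the kernel's
SetHPageVmemmapOptimized()/ClearHPageVmemmapOptimized(),
vmemmap_remap_free() and hugetlb_optimize_vmemmap_key, not real API; it
only illustrates the control flow under those assumptions.

#include <stdbool.h>
#include <stdio.h>

#define FLAG_OPTIMIZED (1u << 0)

struct page { unsigned int flags; };

static int optimized_count; /* crude stand-in for hugetlb_optimize_vmemmap_key */

/* Stand-in for vmemmap_remap_free(): 0 on success, nonzero on failure. */
static int try_remap(bool should_fail)
{
	return should_fail ? -1 : 0;
}

static void optimize(struct page *head, bool should_fail)
{
	/* Optimistically assume success, as the patch does. */
	head->flags |= FLAG_OPTIMIZED;
	optimized_count++;

	if (try_remap(should_fail)) {
		/* Failure: undo both pieces of optimistic state. */
		optimized_count--;
		head->flags &= ~FLAG_OPTIMIZED;
	}
}

int main(void)
{
	struct page head = { 0 };

	optimize(&head, false);
	printf("success: flag=%u count=%d\n",
	       head.flags & FLAG_OPTIMIZED, optimized_count);

	head.flags = 0;
	optimize(&head, true);
	printf("failure: flag=%u count=%d\n",
	       head.flags & FLAG_OPTIMIZED, optimized_count);
	return 0;
}

Compiled and run as-is, the failure path leaves both the flag and the
counter back at zero, i.e. the error path is a pure rollback of the
optimistic state.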