Message-ID: <1c910c9a-d5fd-8eb8-526d-bb1f71833c30@oracle.com>
Date: Thu, 10 Jun 2021 15:35:02 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org,
osalvador@...e.de, mhocko@...e.com, song.bao.hua@...ilicon.com,
david@...hat.com, chenhuang5@...wei.com, bodeddub@...zon.com,
corbet@....net
Cc: duanxiongchun@...edance.com, fam.zheng@...edance.com,
zhengqi.arch@...edance.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 3/5] mm: sparsemem: split the huge PMD mapping of vmemmap pages

On 6/9/21 5:13 AM, Muchun Song wrote:
> If the vmemmap is huge PMD mapped, we should split the huge PMD first
> and then change the PTE page table entries. This patch adds the
> ability to split the huge PMD mapping of vmemmap pages.
>
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> include/linux/mm.h | 2 +-
> mm/hugetlb.c | 42 ++++++++++++++++++++++++++++++++++--
> mm/hugetlb_vmemmap.c | 3 ++-
> mm/sparse-vmemmap.c | 61 +++++++++++++++++++++++++++++++++++++++++++++-------
> 4 files changed, 96 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index cadc8cc2c715..b97e1486c5c1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3056,7 +3056,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
> #endif
>
> void vmemmap_remap_free(unsigned long start, unsigned long end,
> - unsigned long reuse);
> + unsigned long reuse, struct list_head *pgtables);
> int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> unsigned long reuse, gfp_t gfp_mask);
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index c3b2a8a494d6..3137c72d9cc7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1609,6 +1609,13 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
> static void __prep_new_huge_page(struct hstate *h, struct page *page)
> {
> free_huge_page_vmemmap(h, page);
> + /*
> + * Because we store preallocated pages on @page->lru,
> + * vmemmap_pgtable_free() must be called before the
> + * initialization of @page->lru in INIT_LIST_HEAD().
> + */
> + vmemmap_pgtable_free(&page->lru);
> +
> INIT_LIST_HEAD(&page->lru);
> set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> hugetlb_set_page_subpool(page, NULL);
> @@ -1775,14 +1782,29 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
> nodemask_t *node_alloc_noretry)
> {
> struct page *page;
> + LIST_HEAD(pgtables);
> +
> + if (vmemmap_pgtable_prealloc(h, &pgtables))
> + return NULL;
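
To make sure I understand the mechanics: the split itself should amount
to roughly the following.  This is only a sketch; preserving the
original protection bits, synchronization and TLB flushing details are
glossed over, and the names here are mine, not the series':

static void split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
				   struct page *pgtable)
{
	unsigned long addr = start;
	unsigned long pfn = pmd_pfn(*pmd);
	pte_t *pte = (pte_t *)page_address(pgtable);
	int i;

	/* remap the PMD_SIZE range with 4K entries, one pfn at a time */
	for (i = 0; i < PTRS_PER_PTE; i++, pfn++, addr += PAGE_SIZE)
		set_pte_at(&init_mm, addr, pte + i,
			   pfn_pte(pfn, PAGE_KERNEL));

	/* replace the huge PMD with a pointer to the new PTE table */
	pmd_populate_kernel(&init_mm, pmd, pte);
	flush_tlb_kernel_range(start, start + PMD_SIZE);
}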

In the previous two patches I asked:
- Can we wait until later to prealloc vmemmap pages for gigantic pages
  allocated from bootmem?
- Should we fail to add a hugetlb page to the pool if we can not do
  vmemmap optimization?

Depending on the answers to those questions, we may be able to eliminate
these vmemmap_pgtable_prealloc/vmemmap_pgtable_free calls in hugetlb.c.

What about adding the calls to free_huge_page_vmemmap?

At the beginning of free_huge_page_vmemmap, allocate any needed vmemmap
pgtable pages.  If the allocation fails, skip the optimization.  Any
preallocated pages that are not consumed can be freed before returning
to the caller.
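
Something like the following is what I have in mind.  A sketch only,
built on this series' helpers; the computation of the start/end/reuse
addresses and the early-return checks are elided:

void free_huge_page_vmemmap(struct hstate *h, struct page *head)
{
	LIST_HEAD(pgtables);
	unsigned long start, end, reuse;

	/* compute start/end/reuse for the vmemmap of @head as today */
	...

	/*
	 * Preallocate the page table pages needed to split the huge
	 * PMD mapping.  If the allocation fails, simply skip the
	 * vmemmap optimization for this page instead of failing the
	 * hugetlb page allocation.
	 */
	if (vmemmap_pgtable_prealloc(h, &pgtables))
		return;

	vmemmap_remap_free(start, end, reuse, &pgtables);

	/* free any preallocated pages that were not consumed */
	vmemmap_pgtable_free(&pgtables);
}

That way hugetlb.c does not need to know about the page table pages
at all.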

Since we also know the page/address in the page table, can we check to
see if it is already PTE mapped?  If so, can we then skip the
allocation?
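
Something like the below is what I am thinking of.  Again only a
sketch, with a name of my own choosing; whether walking the kernel page
table like this is safe in all the contexts where we optimize is
untested:

/*
 * Return true if the vmemmap at @addr is still mapped by a huge PMD,
 * i.e. a split (and therefore a preallocated page table page) is
 * actually needed.
 */
static bool vmemmap_pmd_mapped(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	p4d_t *p4d = p4d_offset(pgd, addr);
	pud_t *pud = pud_offset(p4d, addr);
	pmd_t *pmd = pmd_offset(pud, addr);

	return pmd_leaf(*pmd);
}

If that returns false, the PMD covering this range was already split
(by a previously optimized page in the same area, for example) and the
preallocation could be skipped.
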
--
Mike Kravetz