Message-ID: <1b9d008a-7544-cc85-5c2f-532b984eb5b5@arm.com>
Date: Wed, 19 May 2021 17:15:03 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Muchun Song <songmuchun@...edance.com>, will@...nel.org,
akpm@...ux-foundation.org, david@...hat.com, bodeddub@...zon.com,
osalvador@...e.de, mike.kravetz@...cle.com, rientjes@...gle.com
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, duanxiongchun@...edance.com,
fam.zheng@...edance.com, zhengqi.arch@...edance.com
Subject: Re: [PATCH] arm64: mm: hugetlb: add support for free vmemmap pages of
HugeTLB
On 5/18/21 2:48 PM, Muchun Song wrote:
> The preparation of supporting freeing vmemmap associated with each
> HugeTLB page is ready, so we can support this feature for arm64.
>
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> arch/arm64/mm/mmu.c | 5 +++++
> fs/Kconfig | 2 +-
> 2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5d37e461c41f..967b01ce468d 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -23,6 +23,7 @@
> #include <linux/mm.h>
> #include <linux/vmalloc.h>
> #include <linux/set_memory.h>
> +#include <linux/hugetlb.h>
>
> #include <asm/barrier.h>
> #include <asm/cputype.h>
> @@ -1134,6 +1135,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> pmd_t *pmdp;
>
> WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> +
> + if (is_hugetlb_free_vmemmap_enabled() && !altmap)
> + return vmemmap_populate_basepages(start, end, node, altmap);
Setting aside the fact that this forces the kernel to use only base
page size mappings for the vmemmap (unless an altmap is also
requested), which might reduce performance, it also allows the vmemmap
mapping to be torn down or built up at runtime, which could potentially
collide with other kernel page table walkers such as ptdump or the
memory hot-remove path ! How are those possible collisions protected
against right now ? Does this vmemmap manipulation not increase latency
for HugeTLB usage ? Should this runtime enablement not also take into
account some other qualifying information, apart from the potential
memory saved from the struct page areas ? Just wondering.
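
For context, the base pages fallback being selected here amounts to
the generic vmemmap_populate_basepages() in mm/sparse-vmemmap.c, which
maps the entire range with PTE level entries instead of the PMD block
mappings arm64 uses otherwise. Roughly (paraphrased from mainline, so
treat this as a sketch rather than the exact code):

int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
					 int node, struct vmem_altmap *altmap)
{
	unsigned long addr = start;
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	/* One PTE per PAGE_SIZE chunk of the vmemmap range */
	for (; addr < end; addr += PAGE_SIZE) {
		pgd = vmemmap_pgd_populate(addr, node);
		if (!pgd)
			return -ENOMEM;
		p4d = vmemmap_p4d_populate(pgd, addr, node);
		if (!p4d)
			return -ENOMEM;
		pud = vmemmap_pud_populate(p4d, addr, node);
		if (!pud)
			return -ENOMEM;
		pmd = vmemmap_pmd_populate(pud, addr, node);
		if (!pmd)
			return -ENOMEM;
		pte = vmemmap_pte_populate(pmd, addr, node, altmap);
		if (!pte)
			return -ENOMEM;
	}

	return 0;
}

So every PAGE_SIZE chunk of the vmemmap gets its own PTE, i.e. one
more page table level to walk compared with the PMD block map, which
is the performance cost mentioned above.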
> +
> do {
> next = pmd_addr_end(addr, end);
>
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 6ce6fdac00a3..02c2d3bf1cb8 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -242,7 +242,7 @@ config HUGETLB_PAGE
>
> config HUGETLB_PAGE_FREE_VMEMMAP
> def_bool HUGETLB_PAGE
> - depends on X86_64
> + depends on X86_64 || ARM64
> depends on SPARSEMEM_VMEMMAP
>
> config MEMFD_CREATE
>
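Also, for anyone following along - as far as I can tell from the
earlier patches in this series, is_hugetlb_free_vmemmap_enabled()
merely reflects a boot time flag toggled via the hugetlb_free_vmemmap=
command line parameter, roughly along these lines (a sketch from my
reading of the series, not necessarily this exact revision):

/* mm/hugetlb_vmemmap.c (sketch) */
bool hugetlb_free_vmemmap_enabled;

static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
	/* Accept "on"/"off", reject anything else */
	if (!buf)
		return -EINVAL;

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (!strcmp(buf, "off"))
		hugetlb_free_vmemmap_enabled = false;
	else
		return -EINVAL;

	return 0;
}
early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);

/* include/linux/hugetlb.h (sketch) */
static inline bool is_hugetlb_free_vmemmap_enabled(void)
{
	return hugetlb_free_vmemmap_enabled;
}

So whether the vmemmap ends up PMD or PTE mapped is decided purely by
a boot parameter rather than by anything platform specific, which is
why I am asking what other qualifying information should gate it.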