Message-ID: <20230726110113.GT1901145@kernel.org>
Date: Wed, 26 Jul 2023 14:01:13 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Usama Arif <usama.arif@...edance.com>
Cc: linux-mm@...ck.org, muchun.song@...ux.dev, mike.kravetz@...cle.com,
linux-kernel@...r.kernel.org, fam.zheng@...edance.com,
liangma@...ngbit.com, simon.evans@...edance.com,
punit.agrawal@...edance.com
Subject: Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct
memblock_region
On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
> This propagates the hugepage size from the memblock APIs
> (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
> so that it can be stored in struct memblock_region. This does not
> introduce any functional change and hugepage_size is not used in
> this commit. It is just setup for the next commit, where hugepage_size
> is used to skip initialization of struct pages that will be freed later
> when HVO is enabled.
>
> Signed-off-by: Usama Arif <usama.arif@...edance.com>
> ---
> arch/arm64/mm/kasan_init.c                   |  2 +-
> arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
> arch/powerpc/platforms/pseries/setup.c       |  4 +-
> arch/powerpc/sysdev/dart_iommu.c             |  2 +-
> include/linux/memblock.h                     |  8 ++-
> mm/cma.c                                     |  4 +-
> mm/hugetlb.c                                 |  6 +-
> mm/memblock.c                                | 60 ++++++++++++--------
> mm/mm_init.c                                 |  2 +-
> mm/sparse-vmemmap.c                          |  2 +-
> tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
> 11 files changed, 56 insertions(+), 38 deletions(-)
>
[ snip ]
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index f71ff9f0ec81..bb8019540d73 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -63,6 +63,7 @@ struct memblock_region {
> #ifdef CONFIG_NUMA
> int nid;
> #endif
> + phys_addr_t hugepage_size;
> };
>
> /**
> @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
> phys_addr_t start, phys_addr_t end);
> phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
> phys_addr_t align, phys_addr_t start,
> - phys_addr_t end, int nid, bool exact_nid);
> + phys_addr_t end, int nid, bool exact_nid,
> + phys_addr_t hugepage_size);
Rather than adding yet another parameter to memblock_phys_alloc_range(), we
can have an API that sets a flag on the reserved regions.
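
For illustration, something along these lines in mm/memblock.c; the flag
bit and the helper name below are made up for this example, they are not
in the patch:

/*
 * Hypothetical new bit in enum memblock_flags; 0x10 is the next free
 * bit after MEMBLOCK_DRIVER_MANAGED (0x8).
 */
#define MEMBLOCK_RSRV_NOINIT	0x10

/* Mark an already reserved range so that memmap init can skip it. */
int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base,
						  phys_addr_t size)
{
	struct memblock_type *type = &memblock.reserved;
	int start_rgn, end_rgn, i, ret;

	/* Split regions so that the range maps to whole regions. */
	ret = memblock_isolate_range(type, base, size,
				     &start_rgn, &end_rgn);
	if (ret)
		return ret;

	for (i = start_rgn; i < end_rgn; i++)
		type->regions[i].flags |= MEMBLOCK_RSRV_NOINIT;

	return 0;
}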
With this, the hugetlb reservation code can set the flag when HVO is
enabled, and memmap_init_reserved_pages() will skip regions with this flag
set.
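
On the memmap init side it would then be a check along these lines (a
sketch only; the existing nomap handling in memmap_init_reserved_pages()
stays as it is):

static void __init memmap_init_reserved_pages(void)
{
	struct memblock_region *region;

	for_each_reserved_mem_region(region) {
		/* HVO will free this memmap later, don't initialize it */
		if (region->flags & MEMBLOCK_RSRV_NOINIT)
			continue;

		reserve_bootmem_region(region->base,
				       region->base + region->size);
	}
}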
> phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
>
> static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
> @@ -415,7 +417,7 @@ void *memblock_alloc_exact_nid_raw(phys_addr_t size, phys_addr_t align,
> int nid);
> void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
> phys_addr_t min_addr, phys_addr_t max_addr,
> - int nid);
> + int nid, phys_addr_t hugepage_size);
> void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
> phys_addr_t min_addr, phys_addr_t max_addr,
> int nid);
> @@ -431,7 +433,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
> {
> return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
> MEMBLOCK_ALLOC_ACCESSIBLE,
> - NUMA_NO_NODE);
> + NUMA_NO_NODE, 0);
> }
>
> static inline void *memblock_alloc_from(phys_addr_t size,
--
Sincerely yours,
Mike.