Message-ID: <f1e6036c-1966-56dd-0acf-e795671c5ec4@redhat.com>
Date: Tue, 30 Jun 2020 14:52:35 +0200
From: David Hildenbrand <david@...hat.com>
To: Wei Yang <richard.weiyang@...ux.alibaba.com>,
dan.j.williams@...el.com, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/sparse: only sub-section aligned range would be
populated
On 30.06.20 04:14, Wei Yang wrote:
> There are two code paths which invoke __populate_section_memmap():
>
> * sparse_init_nid()
> * sparse_add_section()
>
> For both cases, we are sure the memory range is sub-section aligned:
>
> * we pass PAGES_PER_SECTION to sparse_init_nid()
> * we check the range with check_pfn_span() before calling
>   sparse_add_section()
>
> Also, in the counterpart of __populate_section_memmap(), we don't do such
> a calculation and check, since the range has already been checked by
> check_pfn_span() in __remove_pages().
>
> Remove the calculation and check to keep it simple and consistent with
> its counterpart.
>
> Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
> ---
> mm/sparse-vmemmap.c | 16 ++--------------
> 1 file changed, 2 insertions(+), 14 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 0db7738d76e9..24b01ebae111 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -247,20 +247,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
> struct page * __meminit __populate_section_memmap(unsigned long pfn,
> unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> {
> - unsigned long start;
> - unsigned long end;
> -
> - /*
> - * The minimum granularity of memmap extensions is
> - * PAGES_PER_SUBSECTION as allocations are tracked in the
> - * 'subsection_map' bitmap of the section.
> - */
> - end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
> - pfn &= PAGE_SUBSECTION_MASK;
> - nr_pages = end - pfn;
> -
> - start = (unsigned long) pfn_to_page(pfn);
> - end = start + nr_pages * sizeof(struct page);
> + unsigned long start = (unsigned long) pfn_to_page(pfn);
> + unsigned long end = start + nr_pages * sizeof(struct page);
>
> if (vmemmap_populate(start, end, nid, altmap))
> return NULL;
>
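FWIW, for the hotplug path check_pfn_span() already rejects misaligned
ranges before sparse_add_section() is reached. A simplified sketch of that
check (not verbatim kernel source, details may differ slightly):

static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
			  const char *reason)
{
	unsigned long min_align;

	/* With a vmemmap, hot(un)plug granularity is a sub-section. */
	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
		min_align = PAGES_PER_SUBSECTION;
	else
		min_align = PAGES_PER_SECTION;

	if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
		WARN(1, "%s: misaligned range [%#lx, %#lx)\n",
		     reason, pfn, pfn + nr_pages);
		return -EINVAL;
	}
	return 0;
}
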
Can we add a WARN_ON_ONCE to catch mis-use in the future?
if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
		 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
	return NULL;
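
With that, the function would end up looking roughly like this (just a
sketch, not tested; exact placement of the check is a matter of taste):

struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long start, end;

	/* Callers are expected to pass sub-section aligned ranges. */
	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
			 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
		return NULL;

	start = (unsigned long) pfn_to_page(pfn);
	end = start + nr_pages * sizeof(struct page);

	if (vmemmap_populate(start, end, nid, altmap))
		return NULL;

	return pfn_to_page(pfn);
}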
--
Thanks,
David / dhildenb