Message-ID: <20200805214955.ds7y3nwjoz2ms37h@master>
Date: Wed, 5 Aug 2020 21:49:55 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Wei Yang <richard.weiyang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, david@...hat.com
Subject: Re: [Patch v2] mm/sparse: only sub-section aligned range would be
populated
On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>There are two code paths that invoke __populate_section_memmap():
>
> * sparse_init_nid()
> * sparse_add_section()
>
>For both cases, we are sure the memory range is sub-section aligned:
>
> * we pass PAGES_PER_SECTION to sparse_init_nid()
> * we check the range with check_pfn_span() before calling
>   sparse_add_section()
>
>Also, in the counterpart of __populate_section_memmap() we don't do such a
>calculation and check, since the range is already checked by check_pfn_span()
>in __remove_pages().
>
>Remove the calculation and check here to keep it simple and consistent with
>its counterpart.
>
>Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
>
Hi Andrew,

Has this one been picked up?
>---
>v2:
> * add a warn on once for unaligned range, suggested by David
>---
> mm/sparse-vmemmap.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
>diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>index 0db7738d76e9..8d3a1b6287c5 100644
>--- a/mm/sparse-vmemmap.c
>+++ b/mm/sparse-vmemmap.c
>@@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
> struct page * __meminit __populate_section_memmap(unsigned long pfn,
> unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> {
>- unsigned long start;
>- unsigned long end;
>-
>- /*
>- * The minimum granularity of memmap extensions is
>- * PAGES_PER_SUBSECTION as allocations are tracked in the
>- * 'subsection_map' bitmap of the section.
>- */
>- end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
>- pfn &= PAGE_SUBSECTION_MASK;
>- nr_pages = end - pfn;
>-
>- start = (unsigned long) pfn_to_page(pfn);
>- end = start + nr_pages * sizeof(struct page);
>+ unsigned long start = (unsigned long) pfn_to_page(pfn);
>+ unsigned long end = start + nr_pages * sizeof(struct page);
>+
>+ if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>+ !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>+ return NULL;
>
> if (vmemmap_populate(start, end, nid, altmap))
> return NULL;
>--
>2.20.1 (Apple Git-117)
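
Just to make the reasoning above concrete, here is a quick userspace-only
sketch (the subsection size below is an assumed x86-64-style value of 512
pages, not something taken from the patch) showing that the rounding being
removed is a no-op whenever the input range is already subsection aligned,
which both callers guarantee:

/*
 * Illustration only, not kernel code: mimics the rounding that the
 * patch removes, with assumed x86-64-style constants (2MB subsection,
 * 4KB pages).
 */
#include <stdio.h>

#define PAGES_PER_SUBSECTION	512UL
#define PAGE_SUBSECTION_MASK	(~(PAGES_PER_SUBSECTION - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* A range that is already subsection aligned, as both callers guarantee. */
	unsigned long pfn = 4 * PAGES_PER_SUBSECTION;
	unsigned long nr_pages = 2 * PAGES_PER_SUBSECTION;

	/* The rounding the patch deletes ... */
	unsigned long end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
	unsigned long new_pfn = pfn & PAGE_SUBSECTION_MASK;
	unsigned long new_nr = end - new_pfn;

	/* ... leaves an aligned range untouched: prints "same range: yes". */
	printf("same range: %s\n",
	       (new_pfn == pfn && new_nr == nr_pages) ? "yes" : "no");
	return 0;
}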
--
Wei Yang
Help you, Help me