Message-ID: <20200228132559.lbzci6eiwz52quhn@master>
Date: Fri, 28 Feb 2020 13:25:59 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Segher Boessenkool <segher@...nel.crashing.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...nel.org>, Baoquan He <bhe@...hat.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v2 1/2] mm/memory_hotplug: simplify calculation of number
of pages in __remove_pages()
On Fri, Feb 28, 2020 at 10:58:18AM +0100, David Hildenbrand wrote:
>In commit 52fb87c81f11 ("mm/memory_hotplug: cleanup __remove_pages()"),
>we cleaned up __remove_pages(), and introduced a shorter variant to
>calculate the number of pages to the next section boundary.
>
>Turns out we can make this calculation easier to read. We always want to
>have the number of pages (> 0) to the next section boundary, starting from
>the current pfn.
>
>We'll clean up __remove_pages() in a follow-up patch and directly make
>use of this computation.
>
>Suggested-by: Segher Boessenkool <segher@...nel.crashing.org>
>Cc: Andrew Morton <akpm@...ux-foundation.org>
>Cc: Oscar Salvador <osalvador@...e.de>
>Cc: Michal Hocko <mhocko@...nel.org>
>Cc: Baoquan He <bhe@...hat.com>
>Cc: Dan Williams <dan.j.williams@...el.com>
>Cc: Wei Yang <richardw.yang@...ux.intel.com>
>Signed-off-by: David Hildenbrand <david@...hat.com>
Reviewed-by: Wei Yang <richard.weiyang@...il.com>
>---
> mm/memory_hotplug.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>index 4a9b3f6c6b37..8fe7e32dad48 100644
>--- a/mm/memory_hotplug.c
>+++ b/mm/memory_hotplug.c
>@@ -534,7 +534,8 @@ void __remove_pages(unsigned long pfn, unsigned long nr_pages,
> for (; pfn < end_pfn; pfn += cur_nr_pages) {
> cond_resched();
> /* Select all remaining pages up to the next section boundary */
>- cur_nr_pages = min(end_pfn - pfn, -(pfn | PAGE_SECTION_MASK));
>+ cur_nr_pages = min(end_pfn - pfn,
>+ SECTION_ALIGN_UP(pfn + 1) - pfn);
> __remove_section(pfn, cur_nr_pages, map_offset, altmap);
> map_offset = 0;
> }
>--
>2.24.1
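
For anyone who wants to double-check the equivalence of the old and new
expressions, below is a quick user-space sketch (my own, not part of the
patch). PAGES_PER_SECTION = 1UL << 15 is an assumption matching the x86-64
default; the two macros simply mirror include/linux/mmzone.h:

#include <assert.h>
#include <stdio.h>

/* Assumed section size: 128 MiB sections with 4 KiB pages (x86-64 default). */
#define PAGES_PER_SECTION	(1UL << 15)
#define PAGE_SECTION_MASK	(~(PAGES_PER_SECTION - 1))
#define SECTION_ALIGN_UP(pfn)	(((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)

int main(void)
{
	unsigned long pfn;

	for (pfn = 0; pfn < 4 * PAGES_PER_SECTION; pfn++) {
		/* Old expression: number of pages up to the next section boundary. */
		unsigned long old_nr = -(pfn | PAGE_SECTION_MASK);
		/* New expression from this patch: same value, easier to read. */
		unsigned long new_nr = SECTION_ALIGN_UP(pfn + 1) - pfn;

		assert(old_nr == new_nr);
		assert(new_nr >= 1 && new_nr <= PAGES_PER_SECTION);
	}
	printf("old and new expressions agree for all tested pfns\n");
	return 0;
}

Both forms yield a value in (0, PAGES_PER_SECTION]: e.g. for pfn = 1 they
give 0x7fff pages, and for a section-aligned pfn they give a full section.
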
--
Wei Yang
Help you, Help me