lists.openwall.net - Open Source and information security mailing list archives
Date:   Fri, 28 Feb 2020 13:26:26 +0000
From:   Wei Yang <richard.weiyang@...il.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Segher Boessenkool <segher@...nel.crashing.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Oscar Salvador <osalvador@...e.de>,
        Michal Hocko <mhocko@...nel.org>, Baoquan He <bhe@...hat.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Wei Yang <richardw.yang@...ux.intel.com>
Subject: Re: [PATCH v2 2/2] mm/memory_hotplug: cleanup __add_pages()

On Fri, Feb 28, 2020 at 10:58:19AM +0100, David Hildenbrand wrote:
>Let's drop the basically unused section stuff and simplify. The logic
>now matches the logic in __remove_pages().
>
>Cc: Segher Boessenkool <segher@...nel.crashing.org>
>Cc: Andrew Morton <akpm@...ux-foundation.org>
>Cc: Oscar Salvador <osalvador@...e.de>
>Cc: Michal Hocko <mhocko@...nel.org>
>Cc: Baoquan He <bhe@...hat.com>
>Cc: Dan Williams <dan.j.williams@...el.com>
>Cc: Wei Yang <richardw.yang@...ux.intel.com>
>Signed-off-by: David Hildenbrand <david@...hat.com>

Reviewed-by: Wei Yang <richard.weiyang@...il.com>

>---
> mm/memory_hotplug.c | 18 +++++++-----------
> 1 file changed, 7 insertions(+), 11 deletions(-)
>
>diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>index 8fe7e32dad48..1a00b5a37ef6 100644
>--- a/mm/memory_hotplug.c
>+++ b/mm/memory_hotplug.c
>@@ -307,8 +307,9 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
> int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
> 		struct mhp_restrictions *restrictions)
> {
>+	const unsigned long end_pfn = pfn + nr_pages;
>+	unsigned long cur_nr_pages;
> 	int err;
>-	unsigned long nr, start_sec, end_sec;
> 	struct vmem_altmap *altmap = restrictions->altmap;
> 
> 	err = check_hotplug_memory_addressable(pfn, nr_pages);
>@@ -331,18 +332,13 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
> 	if (err)
> 		return err;
> 
>-	start_sec = pfn_to_section_nr(pfn);
>-	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
>-	for (nr = start_sec; nr <= end_sec; nr++) {
>-		unsigned long pfns;
>-
>-		pfns = min(nr_pages, PAGES_PER_SECTION
>-				- (pfn & ~PAGE_SECTION_MASK));
>-		err = sparse_add_section(nid, pfn, pfns, altmap);
>+	for (; pfn < end_pfn; pfn += cur_nr_pages) {
>+		/* Select all remaining pages up to the next section boundary */
>+		cur_nr_pages = min(end_pfn - pfn,
>+				   SECTION_ALIGN_UP(pfn + 1) - pfn);
>+		err = sparse_add_section(nid, pfn, cur_nr_pages, altmap);
> 		if (err)
> 			break;
>-		pfn += pfns;
>-		nr_pages -= pfns;
> 		cond_resched();
> 	}
> 	vmemmap_populate_print_last();
>-- 
>2.24.1

-- 
Wei Yang
Help you, Help me
