Message-ID: <20230407124054.27iiers6o36pdfei@box.shutemov.name>
Date:   Fri, 7 Apr 2023 15:40:54 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Mike Rapoport <rppt@...nel.org>,
        Guenter Roeck <linux@...ck-us.net>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm-treewide-redefine-max_order-sanely-fix.txt

On Thu, Apr 06, 2023 at 03:44:23PM -0700, Andrew Morton wrote:
> On Fri, 7 Apr 2023 00:14:31 +0300 Mike Rapoport <rppt@...nel.org> wrote:
> 
> > > > Shouldn't that be
> > > > 		else
> > > > 			order = 0;
> > > > ?
> > > 
> > > +Mike.
> > > 
> > > No. start == 0 is MAX_ORDER-aligned. We want to free the pages in the
> > > largest chunks alignment allows.
> > 
> > Right. Before the changes to MAX_ORDER it was
> > 
> > 		order = min(MAX_ORDER - 1UL, __ffs(start));
> > 
> > which would evaluate to 10.
> > 
> > I'd just prefer the comment to include the explanation about why we choose
> > MAX_ORDER for start == 0. Say
> > 
> > 	/*
> > 	 * __ffs() behaviour is undefined for 0 and we want to free the
> > 	 * pages in the largest chunks alignment allows, so set order to
> > 	 * MAX_ORDER when start == 0
> > 	 */
> 
> Meanwhile I'd like to fix "various boot failures (hang) on arm targets"
> in -next, so I queued up Kirill's informal fix for now.

Here's my variant of the fix-up with more verbose comments.

diff --git a/mm/memblock.c b/mm/memblock.c
index 7911224b1ed3..381e36ac9e4d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2043,7 +2043,16 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 	int order;
 
 	while (start < end) {
-		order = min_t(int, MAX_ORDER, __ffs(start));
+		/*
+		 * Free the pages in the largest chunks alignment allows.
+		 *
+		 * __ffs() behaviour is undefined for 0. start == 0 is
+		 * MAX_ORDER-aligned; set order to MAX_ORDER in that case.
+		 */
+		if (start)
+			order = min_t(int, MAX_ORDER, __ffs(start));
+		else
+			order = MAX_ORDER;
 
 		while (start + (1UL << order) > end)
 			order--;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c8f0a8c2d049..8e0fa209d533 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -605,7 +605,18 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 	 * this and the first chunk to online will be pageblock_nr_pages.
 	 */
 	for (pfn = start_pfn; pfn < end_pfn;) {
-		int order = min_t(int, MAX_ORDER, __ffs(pfn));
+		int order;
+
+		/*
+		 * Online pages in the largest chunks alignment allows.
+		 *
+		 * __ffs() behaviour is undefined for 0. pfn == 0 is
+		 * MAX_ORDER-aligned; set order to MAX_ORDER in that case.
+		 */
+		if (pfn)
+			order = min_t(int, MAX_ORDER, __ffs(pfn));
+		else
+			order = MAX_ORDER;
 
 		(*online_page_callback)(pfn_to_page(pfn), order);
 		pfn += (1UL << order);
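
For illustration, here is a minimal userspace sketch (not part of the
patch) of the order-selection logic above, assuming MAX_ORDER == 10 and
using __builtin_ctzl() as a stand-in for the kernel's __ffs():

#include <stdio.h>

#define MAX_ORDER 10

static int pick_order(unsigned long start, unsigned long end)
{
	int order;

	/* __builtin_ctzl(0) is undefined, just like __ffs(0). */
	if (start)
		order = MAX_ORDER < __builtin_ctzl(start) ?
			MAX_ORDER : __builtin_ctzl(start);
	else
		order = MAX_ORDER;

	/* Clamp so the chunk does not run past the end of the range. */
	while (start + (1UL << order) > end)
		order--;

	return order;
}

int main(void)
{
	/* start == 0: without the special case the order is undefined. */
	printf("order for [0, 2048)   = %d\n", pick_order(0, 2048));
	/* start == 768 has bit 8 as its lowest set bit, so order 8. */
	printf("order for [768, 2048) = %d\n", pick_order(768, 2048));
	return 0;
}

The first case prints 10 only because of the explicit start == 0 branch;
the second is limited to order 8 by the alignment of start.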
-- 
  Kiryl Shutsemau / Kirill A. Shutemov
