Message-Id: <20170127222149.30893-2-toshi.kani@hpe.com>
Date:   Fri, 27 Jan 2017 15:21:48 -0700
From:   Toshi Kani <toshi.kani@....com>
To:     akpm@...ux-foundation.org, gregkh@...uxfoundation.org
Cc:     linux-mm@...ck.org, zhenzhang.zhang@...wei.com,
        arbab@...ux.vnet.ibm.com, dan.j.williams@...el.com,
        abanman@....com, rientjes@...gle.com, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org, toshi.kani@....com
Subject: [PATCH v2 1/2] mm/memory_hotplug.c: check start_pfn in test_pages_in_a_zone()

test_pages_in_a_zone() does not check 'start_pfn' when it is
section-aligned, because 'sec_end_pfn' is then set equal to 'pfn'
and the inner per-page loop never examines 'start_pfn' itself.
Since this function is called to test the range of a sysfs memory
file, 'start_pfn' is always section-aligned.
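
For illustration, a minimal stand-alone sketch of the loop-bound
arithmetic (user-space code, not part of the patch; PAGES_PER_SECTION
and SECTION_ALIGN_UP are modeled after a typical x86_64 config with
128MB sections and 4KB pages, i.e. 32768 pages per section):

#include <stdio.h>

#define PAGES_PER_SECTION      32768UL
#define SECTION_ALIGN_UP(pfn)  (((pfn) + PAGES_PER_SECTION - 1) & \
                                ~(PAGES_PER_SECTION - 1))

int main(void)
{
	/* a section-aligned start_pfn, as sysfs memory files always pass */
	unsigned long start_pfn = 2 * PAGES_PER_SECTION;

	/* old code: sec_end_pfn == start_pfn, so the inner loop
	 * "for (; pfn < sec_end_pfn; ...)" never runs for start_pfn */
	printf("old sec_end_pfn = %lu (== start_pfn %lu)\n",
	       SECTION_ALIGN_UP(start_pfn), start_pfn);

	/* fixed code: align up from start_pfn + 1, i.e. the end of the
	 * section that contains start_pfn, so start_pfn gets checked */
	printf("new sec_end_pfn = %lu\n", SECTION_ALIGN_UP(start_pfn + 1));

	return 0;
}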

Fix it by setting 'sec_end_pfn' to the pfn of the next section
boundary.

Also make sure that this function returns 1 only when the range
belongs to a zone; previously it returned 1 even when no present
section was found and 'zone' was still NULL.

Signed-off-by: Toshi Kani <toshi.kani@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andrew Banman <abanman@....com>
Cc: Reza Arbab <arbab@...ux.vnet.ibm.com>
Cc: <stable@...r.kernel.org> # v4.4+
---
 mm/memory_hotplug.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3e3db7a..c845c5f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1489,7 +1489,7 @@ bool is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
 }
 
 /*
- * Confirm all pages in a range [start, end) is belongs to the same zone.
+ * Confirm all pages in a range [start, end) belong to the same zone.
  */
 int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -1497,9 +1497,9 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 	struct zone *zone = NULL;
 	struct page *page;
 	int i;
-	for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn);
+	for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn + 1);
 	     pfn < end_pfn;
-	     pfn = sec_end_pfn + 1, sec_end_pfn += PAGES_PER_SECTION) {
+	     pfn = sec_end_pfn, sec_end_pfn += PAGES_PER_SECTION) {
 		/* Make sure the memory section is present first */
 		if (!present_section_nr(pfn_to_section_nr(pfn)))
 			continue;
@@ -1518,7 +1518,11 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 			zone = page_zone(page);
 		}
 	}
-	return 1;
+
+	if (zone)
+		return 1;
+	else
+		return 0;
 }
 
 /*
