Date:   Thu, 28 Dec 2017 17:05:44 +0000
From:   Ben Hutchings <ben@...adent.org.uk>
To:     linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:     akpm@...ux-foundation.org, "Reza Arbab" <arbab@...ux.vnet.ibm.com>,
        "Yasuaki Ishimatsu" <isimatu.yasuaki@...fujitsu.com>,
        "Michal Hocko" <mhocko@...e.com>,
        "YASUAKI ISHIMATSU" <yasu.isimatu@...il.com>,
        "Xishi Qiu" <qiuxishi@...wei.com>,
        "Vlastimil Babka" <vbabka@...e.cz>,
        "Linus Torvalds" <torvalds@...ux-foundation.org>
Subject: [PATCH 3.16 089/204] mm/memory_hotplug: change pfn_to_section_nr/section_nr_to_pfn
 macro to inline function

3.16.52-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: YASUAKI ISHIMATSU <yasu.isimatu@...il.com>

commit 1dd2bfc86818ddbc95f98e312e7704350223fd7d upstream.

pfn_to_section_nr() and section_nr_to_pfn() are defined as macros.
pfn_to_section_nr() is fine as a macro, but section_nr_to_pfn() has an
overflow issue if sec is defined as an int.

section_nr_to_pfn() just shifts sec by PFN_SECTION_SHIFT.  If sec is
defined as an unsigned long, section_nr_to_pfn() returns the pfn as a
64-bit value.  But if sec is defined as an int, it returns the pfn as a
32-bit value.

__remove_section() calculates start_pfn using section_nr_to_pfn() and
scn_nr, which is defined as an int.  So if the hot-removed memory
address is above 16TB, the shift overflows and section_nr_to_pfn() does
not calculate the correct pfn.

To make callers pass an argument of the proper type, the patch changes
the macros to inline functions.
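
As a rough, standalone illustration (not kernel code: it assumes
PFN_SECTION_SHIFT is 15, as on x86-64 with 4 KiB pages and 128 MiB
sections, and an LP64 unsigned long), the following userspace snippet
shows how the macro form truncates the pfn at the 16TB boundary while
the inline-function form does not:

#include <stdio.h>

#define PFN_SECTION_SHIFT 15	/* assumed: x86-64, 4 KiB pages, 128 MiB sections */

/* Old form: the result is only as wide as the argument's type. */
#define section_nr_to_pfn_macro(sec)	((sec) << PFN_SECTION_SHIFT)

/* New form: the argument is widened to unsigned long before the shift. */
static inline unsigned long section_nr_to_pfn_inline(unsigned long sec)
{
	return sec << PFN_SECTION_SHIFT;
}

int main(void)
{
	/* Section number of the first section at the 16TB boundary. */
	int scn_nr = 1 << 17;

	/*
	 * The macro shifts a 32-bit int, so the value wraps (formally
	 * undefined behaviour); on common compilers this prints 0.
	 */
	unsigned long bad = section_nr_to_pfn_macro(scn_nr);

	/* The inline function widens first, so the pfn comes out right. */
	unsigned long good = section_nr_to_pfn_inline(scn_nr);

	printf("macro : %#lx\n", bad);	/* typically 0 */
	printf("inline: %#lx\n", good);	/* 0x100000000 */
	return 0;
}

On a 64-bit system this typically prints 0 for the macro and
0x100000000 for the inline function, matching the failure mode
described above.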

Fixes: 815121d2b5cd ("memory_hotplug: clear zone when removing the memory")
Link: http://lkml.kernel.org/r/e643a387-e573-6bbf-d418-c60c8ee3d15e@gmail.com
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Cc: Xishi Qiu <qiuxishi@...wei.com>
Cc: Reza Arbab <arbab@...ux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 include/linux/mmzone.h | 10 ++++++++--
 mm/memory_hotplug.c    |  2 +-
 2 files changed, 9 insertions(+), 3 deletions(-)

--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1094,8 +1094,14 @@ static inline unsigned long early_pfn_to
 #error Allocator MAX_ORDER exceeds SECTION_SIZE
 #endif
 
-#define pfn_to_section_nr(pfn) ((pfn) >> PFN_SECTION_SHIFT)
-#define section_nr_to_pfn(sec) ((sec) << PFN_SECTION_SHIFT)
+static inline unsigned long pfn_to_section_nr(unsigned long pfn)
+{
+	return pfn >> PFN_SECTION_SHIFT;
+}
+static inline unsigned long section_nr_to_pfn(unsigned long sec)
+{
+	return sec << PFN_SECTION_SHIFT;
+}
 
 #define SECTION_ALIGN_UP(pfn)	(((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)
 #define SECTION_ALIGN_DOWN(pfn)	((pfn) & PAGE_SECTION_MASK)
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -735,7 +735,7 @@ static int __remove_section(struct zone
 		return ret;
 
 	scn_nr = __section_nr(ms);
-	start_pfn = section_nr_to_pfn(scn_nr);
+	start_pfn = section_nr_to_pfn((unsigned long)scn_nr);
 	__remove_zone(zone, start_pfn);
 
 	sparse_remove_one_section(zone, ms);
