Date:   Tue, 26 Feb 2019 11:32:24 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     David Rientjes <rientjes@...gle.com>,
        Jing Xiangfeng <jingxiangfeng@...wei.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     mhocko@...nel.org, hughd@...gle.com, linux-mm@...ck.org,
        n-horiguchi@...jp.nec.com, Andrea Arcangeli <aarcange@...hat.com>,
        kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in
 __nr_hugepages_store_common()

On 2/25/19 10:21 PM, David Rientjes wrote:
> On Tue, 26 Feb 2019, Jing Xiangfeng wrote:
>> On 2019/2/26 3:17, David Rientjes wrote:
>>> On Mon, 25 Feb 2019, Mike Kravetz wrote:
>>>
>>>> Ok, what about just moving the calculation/check inside the lock as in the
>>>> untested patch below?
>>>>
>>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>

<snip>

>>>
>>> Looks good; Jing, could you test that this fixes your case?
>>
>> Yes, I have tested this patch, and it also fixes my case.
> 
> Great!
> 
> Reported-by: Jing Xiangfeng <jingxiangfeng@...wei.com>
> Tested-by: Jing Xiangfeng <jingxiangfeng@...wei.com>
> Acked-by: David Rientjes <rientjes@...gle.com>

Thanks Jing and David!

Here is the patch with an updated commit message and the above tags:

From: Mike Kravetz <mike.kravetz@...cle.com>
Date: Tue, 26 Feb 2019 10:43:24 -0800
Subject: [PATCH] hugetlbfs: fix potential over/underflow setting node specific nr_hugepages

The number of node specific huge pages can be set via a file such as:
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
When a node specific value is specified, the global number of huge
pages must also be adjusted.  This adjustment is calculated as the
specified node specific value + (global value - current node value).
If the node specific value provided by the user is large enough, this
calculation can overflow an unsigned long, leading to a smaller than
expected number of huge pages.
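
For example, on a 64-bit system with a global count of 100 and a node
count of 2, a user request of ULONG_MAX - 10 produces
ULONG_MAX - 10 + 98, which wraps around to 87; the pool would then be
shrunk to 87 pages rather than grown.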

To fix, check the calculation for overflow.  If overflow is detected,
use ULONG_MAX as the requested value.  This is in line with the user's
request to allocate as many huge pages as possible.
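
To make the wrap and the clamp concrete, here is a minimal userspace
sketch (plain C, not the kernel code; the counter values are invented
for illustration):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned long global = 100, node = 2;	/* invented counters */
	unsigned long count = ULONG_MAX - 10;	/* very large user request */
	unsigned long old_count = count;

	count += global - node;			/* wraps around to 87 */
	if (count < old_count)			/* wrap detected */
		count = ULONG_MAX;		/* clamp, as the patch does */
	printf("count = %lu\n", count);		/* prints ULONG_MAX */
	return 0;
}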

It was also noticed that the above calculation was done outside the
hugetlb_lock.  Therefore, the global and node specific counts could be
read inconsistently and the calculation could underflow.  To fix, the
calculation is moved to within the routine set_max_huge_pages(), where
the lock is held.
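
A similar sketch (again userspace C with invented values) shows how an
inconsistent snapshot taken without the lock can underflow:

#include <stdio.h>

int main(void)
{
	/*
	 * Counters read without hugetlb_lock; another CPU could have
	 * updated them between the two reads.
	 */
	unsigned long global = 4;	/* stale h->nr_huge_pages */
	unsigned long node = 6;		/* fresh h->nr_huge_pages_node[nid] */
	unsigned long count = 0;	/* user wrote 0 to the sysfs file */

	count += global - node;		/* 4 - 6 underflows to ULONG_MAX - 1 */
	printf("count = %lu\n", count);
	return 0;
}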

Reported-by: Jing Xiangfeng <jingxiangfeng@...wei.com>
Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
Tested-by: Jing Xiangfeng <jingxiangfeng@...wei.com>
Acked-by: David Rientjes <rientjes@...gle.com>
---
 mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b37e3100b7cc..a7e4223d2df5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
 }

 #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static int set_max_huge_pages(struct hstate *h, unsigned long count,
+static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 						nodemask_t *nodes_allowed)
 {
 	unsigned long min_count, ret;
@@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
 		goto decrease_pool;
 	}

+	spin_lock(&hugetlb_lock);
+
+	/*
+	 * Check for a node specific request.  Adjust global count, but
+	 * restrict alloc/free to the specified node.
+	 */
+	if (nid != NUMA_NO_NODE) {
+		unsigned long old_count = count;
+		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+		/*
+		 * If user specified count causes overflow, set to
+		 * largest possible value.
+		 */
+		if (count < old_count)
+			count = ULONG_MAX;
+	}
+
 	/*
 	 * Increase the pool size
 	 * First take pages out of surplus state.  Then make up the
@@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
 	 * pool might be one hugepage larger than it needs to be, but
 	 * within all the constraints specified by the sysctls.
 	 */
-	spin_lock(&hugetlb_lock);
 	while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
 		if (!adjust_pool_surplus(h, nodes_allowed, -1))
 			break;
@@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
 			nodes_allowed = &node_states[N_MEMORY];
 		}
 	} else if (nodes_allowed) {
+		/* Node specific request */
+		init_nodemask_of_node(nodes_allowed, nid);
+	} else {
 		/*
-		 * per node hstate attribute: adjust count to global,
-		 * but restrict alloc/free to the specified node.
+		 * Node specific request, but we could not allocate
+		 * node mask.  Pass in ALL nodes, and clear nid.
 		 */
-		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
-		init_nodemask_of_node(nodes_allowed, nid);
-	} else
+		nid = NUMA_NO_NODE;
 		nodes_allowed = &node_states[N_MEMORY];
+	}

-	err = set_max_huge_pages(h, count, nodes_allowed);
+	err = set_max_huge_pages(h, count, nid, nodes_allowed);
 	if (err)
 		goto out;

-- 
2.17.2
