Message-ID: <alpine.DEB.2.21.1902251116180.167839@chino.kir.corp.google.com>
Date: Mon, 25 Feb 2019 11:17:14 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
cc: Jing Xiangfeng <jingxiangfeng@...wei.com>, mhocko@...nel.org,
akpm@...ux-foundation.org, hughd@...gle.com, linux-mm@...ck.org,
n-horiguchi@...jp.nec.com, aarcange@...hat.com,
kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

On Mon, 25 Feb 2019, Mike Kravetz wrote:
> Ok, what about just moving the calculation/check inside the lock as in the
> untested patch below?
>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
> 1 file changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 1c5219193b9e..5afa77dc7bc8 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
>  }
> 
>  #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>  						nodemask_t *nodes_allowed)
>  {
>  	unsigned long min_count, ret;
> @@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
>  		goto decrease_pool;
>  	}
> 
> +	spin_lock(&hugetlb_lock);
> +
> +	/*
> +	 * Check for a node specific request. Adjust global count, but
> +	 * restrict alloc/free to the specified node.
> +	 */
> +	if (nid != NUMA_NO_NODE) {
> +		unsigned long old_count = count;
> +		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> +		/*
> +		 * If user specified count causes overflow, set to
> +		 * largest possible value.
> +		 */
> +		if (count < old_count)
> +			count = ULONG_MAX;
> +	}
> +
>  	/*
>  	 * Increase the pool size
>  	 * First take pages out of surplus state. Then make up the
> @@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
>  	 * pool might be one hugepage larger than it needs to be, but
>  	 * within all the constraints specified by the sysctls.
>  	 */
> -	spin_lock(&hugetlb_lock);
>  	while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
>  		if (!adjust_pool_surplus(h, nodes_allowed, -1))
>  			break;
> @@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
>  			nodes_allowed = &node_states[N_MEMORY];
>  		}
>  	} else if (nodes_allowed) {
> +		/* Node specific request */
> +		init_nodemask_of_node(nodes_allowed, nid);
> +	} else {
>  		/*
> -		 * per node hstate attribute: adjust count to global,
> -		 * but restrict alloc/free to the specified node.
> +		 * Node specific request, but we could not allocate
> +		 * node mask. Pass in ALL nodes, and clear nid.
>  		 */
> -		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> -		init_nodemask_of_node(nodes_allowed, nid);
> -	} else
> +		nid = NUMA_NO_NODE;
>  		nodes_allowed = &node_states[N_MEMORY];
> +	}
> 
> -	err = set_max_huge_pages(h, count, nodes_allowed);
> +	err = set_max_huge_pages(h, count, nid, nodes_allowed);
>  	if (err)
>  		goto out;
> 
Looks good; Jing, could you test that this fixes your case?
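
The interesting part of the patch is the arithmetic it moves under the lock: a
per-node request is converted to a global target by adding the pages currently
resident on all other nodes, and a wrapped unsigned addition is clamped to
ULONG_MAX rather than being allowed to silently shrink the pool. Below is a
minimal standalone C sketch of that logic; the helper name to_global_count is
illustrative and does not exist in the kernel, and the two counters that
hugetlb reads under hugetlb_lock are taken as plain parameters here.

#include <limits.h>
#include <stdio.h>

/*
 * Translate a node-local hugepage request into a global pool target,
 * mirroring the patch: pages on the other nodes are preserved, and a
 * wrapped unsigned addition is clamped to ULONG_MAX. In the kernel this
 * math runs under hugetlb_lock so the counters cannot change mid-way.
 */
static unsigned long to_global_count(unsigned long requested_on_node,
				     unsigned long global_pages,
				     unsigned long node_pages)
{
	unsigned long count = requested_on_node;
	unsigned long old_count = count;

	/* global_pages >= node_pages always holds for hugetlb counters. */
	count += global_pages - node_pages;

	/* Unsigned addition wraps; count < old_count detects the wrap. */
	if (count < old_count)
		count = ULONG_MAX;
	return count;
}

int main(void)
{
	/* 100 pages requested on one node, 40 pages on the other nodes. */
	printf("%lu\n", to_global_count(100, 50, 10));		/* 140 */
	/* A near-ULONG_MAX request wraps and is clamped, not made tiny. */
	printf("%lu\n", to_global_count(ULONG_MAX, 50, 10));	/* ULONG_MAX */
	return 0;
}

Without the clamp, writing a value close to ULONG_MAX into a per-node
nr_hugepages file would wrap the computed global target to a small number and
shrink the pool instead of growing it, which is the overflow the patch title
refers to.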