Message-ID: <alpine.DEB.2.21.1902241913000.34632@chino.kir.corp.google.com>
Date: Sun, 24 Feb 2019 19:17:45 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
cc: Jing Xiangfeng <jingxiangfeng@...wei.com>, mhocko@...nel.org,
akpm@...ux-foundation.org, hughd@...gle.com, linux-mm@...ck.org,
n-horiguchi@...jp.nec.com, aarcange@...hat.com,
kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in
__nr_hugepages_store_common()
On Sun, 24 Feb 2019, Mike Kravetz wrote:
> > A user can change a node-specific hugetlb count, e.g. via
> > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> > The calculated value of count is the total number of huge pages. It can
> > overflow when a user enters an unreasonably high value. If so, the total
> > number of huge pages becomes a small value the user did not expect.
> > We can fix this simply by setting count to ULONG_MAX on overflow and
> > continuing. This is more in line with the user's intention of allocating
> > as many huge pages as possible.
> >
> > Signed-off-by: Jing Xiangfeng <jingxiangfeng@...wei.com>
>
> Thank you.
>
> Acked-by: Mike Kravetz <mike.kravetz@...cle.com>
>
> > ---
> > mm/hugetlb.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index afef616..6688894 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
> > * per node hstate attribute: adjust count to global,
> > * but restrict alloc/free to the specified node.
> > */
> > + unsigned long old_count = count;
> > count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> > + /*
> > + * If user specified count causes overflow, set to
> > + * largest possible value.
> > + */
> > + if (count < old_count)
> > + count = ULONG_MAX;
> > init_nodemask_of_node(nodes_allowed, nid);
> > } else
> > nodes_allowed = &node_states[N_MEMORY];
> >
Looks like this fixes the overflow issue, but isn't there already a
possible underflow since we don't hold hugetlb_lock? Even if
count == 0, what prevents h->nr_huge_pages_node[nid] from being greater
than h->nr_huge_pages here? I think the per-hstate values need to be read
with READ_ONCE() and stored on the stack to do any sane bounds checking.