Message-ID: <890783047.150265.1294296840281.JavaMail.root@zmail06.collab.prod.int.phx2.redhat.com>
Date: Thu, 6 Jan 2011 01:54:00 -0500 (EST)
From: CAI Qian <caiqian@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, mel@....ul.ie,
mhocko@...e.cz, Eric B Munson <emunson@...bm.net>
Subject: Re: [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment
----- Original Message -----
> On Wed, 5 Jan 2011 13:29:57 -0700
> Eric B Munson <emunson@...bm.net> wrote:
>
> > Huge pages with order >= MAX_ORDER must be allocated at boot via
> > the kernel command line; they cannot be allocated or freed once
> > the kernel is up and running. Currently we allow values to be
> > written to the sysfs and sysctl files controlling pool size for
> > these huge page sizes. This patch makes the store functions for
> > nr_hugepages and nr_overcommit_hugepages return -EINVAL when the
> > pool for a page size >= MAX_ORDER is changed.
> >
>
> gack, you people keep on making me look at the hugetlb code :(
>
> > index 5cb71a9..15bd633 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1443,6 +1443,12 @@ static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> > return -EINVAL;
>
> Why do these functions do a `return 0' if strict_strtoul() failed?
>
> >
> > h = kobj_to_hstate(kobj, &nid);
> > +
> > + if (h->order >= MAX_ORDER) {
> > + NODEMASK_FREE(nodes_allowed);
> > + return -EINVAL;
> > + }
>
> Let's avoid having multiple unwind-and-return paths in a function,
> please. It often leads to resource leaks and locking errors as the
> code evolves.
>
> --- a/mm/hugetlb.c~hugetlb-do-not-allow-pagesize-=-max_order-pool-adjustment-fix
> +++ a/mm/hugetlb.c
> @@ -1363,6 +1363,7 @@ static ssize_t nr_hugepages_show_common(
>
> return sprintf(buf, "%lu\n", nr_huge_pages);
> }
> +
> static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> struct kobject *kobj, struct kobj_attribute *attr,
> const char *buf, size_t len)
> @@ -1375,15 +1376,14 @@ static ssize_t nr_hugepages_store_common
>
> err = strict_strtoul(buf, 10, &count);
> if (err) {
> - NODEMASK_FREE(nodes_allowed);
> - return 0;
> + err = 0; /* This seems wrong */
> + goto out;
> }
>
> h = kobj_to_hstate(kobj, &nid);
> -
> if (h->order >= MAX_ORDER) {
> - NODEMASK_FREE(nodes_allowed);
> - return -EINVAL;
> + err = -EINVAL;
> + goto out;
> }
>
> if (nid == NUMA_NO_NODE) {
> @@ -1411,6 +1411,9 @@ static ssize_t nr_hugepages_store_common
> NODEMASK_FREE(nodes_allowed);
>
> return len;
> +out:
> + NODEMASK_FREE(nodes_allowed);
> + return err;
> }
>
> static ssize_t nr_hugepages_show(struct kobject *kobj,
> _
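Two quick notes on the review comments above. Returning 0 from a sysfs
store propagates to userspace as a zero-byte write(), which callers may
simply retry, so the parse failure really should become an error return.
And the goto-out pattern Andrew applies above is easy to demonstrate
outside the kernel; here is a minimal userspace sketch of the same
single-exit idea (illustrative names only, not the actual kernel code):

/*
 * Illustrative userspace sketch of the single-exit "goto out" pattern:
 * every failure path funnels through one label, so the cleanup cannot
 * be forgotten as the function grows. Names are made up.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct nodemask { int unused; };	/* stand-in for NODEMASK_ALLOC */

static int store_common(const char *buf, unsigned long *count)
{
	struct nodemask *nodes_allowed = malloc(sizeof(*nodes_allowed));
	char *end;
	int err = 0;

	if (!nodes_allowed)
		return -ENOMEM;		/* nothing allocated yet */

	*count = strtoul(buf, &end, 10);
	if (end == buf) {
		err = -EINVAL;		/* report the parse error, not 0 */
		goto out;
	}

	/* ... adjust the pool using nodes_allowed ... */
out:
	free(nodes_allowed);		/* the one and only unwind path */
	return err;
}

int main(void)
{
	unsigned long v;

	printf("%d\n", store_common("512", &v));	/* 0 */
	printf("%d\n", store_common("junk", &v));	/* -22 == -EINVAL */
	return 0;
}
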
As I mentioned in another thread, this is missing a check in
hugetlb_overcommit_handler().
CAI Qian
-------------
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c4a3558..60740bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1510,6 +1510,7 @@ static ssize_t nr_overcommit_hugepages_show(struct kobject *kobj,
struct hstate *h = kobj_to_hstate(kobj, NULL);
return sprintf(buf, "%lu\n", h->nr_overcommit_huge_pages);
}
+
static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
struct kobj_attribute *attr, const char *buf, size_t count)
{
@@ -1986,6 +1987,9 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
if (!write)
tmp = h->nr_overcommit_huge_pages;
 
+ if (write && h->order >= MAX_ORDER)
+ return -EINVAL;
+
table->data = &tmp;
table->maxlen = sizeof(unsigned long);
proc_doulongvec_minmax(table, write, buffer, length, ppos);
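
With both patches applied, the failure is visible from userspace. A
quick test program (a sketch; it assumes the machine was booted with the
default huge page size set to a gigantic size, e.g.
default_hugepagesz=1G hugepagesz=1G hugepages=2, so that
h->order >= MAX_ORDER holds for the default hstate):

/*
 * Sketch: writing nr_overcommit_hugepages should now fail with EINVAL
 * when the default huge page size is a gigantic one. Assumes a boot
 * with default_hugepagesz=1G hugepagesz=1G hugepages=2.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sys/vm/nr_overcommit_hugepages", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "1\n", 2) < 0)
		perror("write");	/* expected: Invalid argument */
	close(fd);
	return 0;
}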