Message-ID: <d6742bae-1b32-10d8-1857-9993a2d06117@zoho.com>
Date: Fri, 30 Sep 2016 00:03:20 +0800
From: zijun_hu <zijun_hu@...o.com>
To: Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: zijun_hu@....com, cl@...ux.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH 1/1] mm/percpu.c: fix potential memory leakage for
pcpu_embed_first_chunk()
From: zijun_hu <zijun_hu@....com>
pcpu_embed_first_chunk() leaks memory if it takes the @out_free error
path when the chunk spans more than 3/4 of the VMALLOC area. At that
point memory has already been allocated and recorded in the @areas
array for every CPU group, but none of it is freed before returning
via @out_free.
To fix this, check the area spanned by the chunk immediately after
memory allocation has completed for all CPU groups, and jump to
@out_free_areas instead of @out_free so that all allocated memory is
freed when the check fails.
Signed-off-by: zijun_hu <zijun_hu@....com>
---
Hi Andrew,
I am sorry I forgot to prefix the subject with the "PATCH" keyword in
the previous mail, so I am resending it with that corrected.
This patch is based on the mmotm/linux-next branch, so it can be
applied directly.
A simplified userspace sketch of the error-path layout this patch
relies on follows below, for reference.
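The sketch below is a minimal, self-contained userspace analogue, not
kernel code; the names embed_first_chunk, live_areas, NR_GROUPS and
GROUP_SIZE are illustrative. It only shows why a failure detected
after the per-group allocation loop must jump to the label that walks
@areas and frees every recorded allocation (@out_free_areas) rather
than the label that only releases the bookkeeping (@out_free).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_GROUPS	4
#define GROUP_SIZE	4096

/* stands in for the live first chunk that the kernel keeps on success */
static void *live_areas[NR_GROUPS];

static int embed_first_chunk(int too_large)
{
	void **areas;
	int group, rc;

	areas = calloc(NR_GROUPS, sizeof(*areas));	/* bookkeeping only */
	if (!areas)
		return -1;

	/* allocate one area per group, recording each pointer in areas[] */
	for (group = 0; group < NR_GROUPS; group++) {
		areas[group] = malloc(GROUP_SIZE);
		if (!areas[group]) {
			rc = -1;
			goto out_free_areas;
		}
	}

	/* a check performed only after every group has been allocated */
	if (too_large) {
		rc = -1;
		goto out_free_areas;	/* @out_free here would leak areas[0..N-1] */
	}

	/* success: the areas stay in use; only the bookkeeping is freed */
	memcpy(live_areas, areas, sizeof(live_areas));
	rc = 0;
	goto out_free;

out_free_areas:
	/* free whatever was recorded in areas[] before the failure */
	for (group = 0; group < NR_GROUPS; group++)
		free(areas[group]);
out_free:
	free(areas);
	return rc;
}

int main(void)
{
	int rc = embed_first_chunk(1);	/* exercise the failure path */

	printf("embed_first_chunk: rc=%d\n", rc);
	return rc ? EXIT_FAILURE : EXIT_SUCCESS;
}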
mm/percpu.c | 36 ++++++++++++++++++------------------
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index 41d9d0b35801..7a5dae185ce1 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1963,7 +1963,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
struct pcpu_alloc_info *ai;
size_t size_sum, areas_size;
unsigned long max_distance;
- int group, i, rc;
+ int group, i, j, rc;
ai = pcpu_build_alloc_info(reserved_size, dyn_size, atom_size,
cpu_distance_fn);
@@ -1979,7 +1979,8 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
goto out_free;
}
- /* allocate, copy and determine base address */
+ /* allocate, copy and determine base address & max_distance */
+ j = 0;
for (group = 0; group < ai->nr_groups; group++) {
struct pcpu_group_info *gi = &ai->groups[group];
unsigned int cpu = NR_CPUS;
@@ -2000,6 +2001,21 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
areas[group] = ptr;
base = min(ptr, base);
+ if (ptr > areas[j])
+ j = group;
+ }
+ max_distance = areas[j] - base;
+ max_distance += ai->unit_size * ai->groups[j].nr_units;
+
+ /* warn if maximum distance is further than 75% of vmalloc space */
+ if (max_distance > VMALLOC_TOTAL * 3 / 4) {
+ pr_warn("max_distance=0x%lx too large for vmalloc space 0x%lx\n",
+ max_distance, VMALLOC_TOTAL);
+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+ /* and fail if we have fallback */
+ rc = -EINVAL;
+ goto out_free_areas;
+#endif
}
/*
@@ -2024,24 +2040,8 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
}
/* base address is now known, determine group base offsets */
- i = 0;
for (group = 0; group < ai->nr_groups; group++) {
ai->groups[group].base_offset = areas[group] - base;
- if (areas[group] > areas[i])
- i = group;
- }
- max_distance = ai->groups[i].base_offset +
- (unsigned long)ai->unit_size * ai->groups[i].nr_units;
-
- /* warn if maximum distance is further than 75% of vmalloc space */
- if (max_distance > VMALLOC_TOTAL * 3 / 4) {
- pr_warn("max_distance=0x%lx too large for vmalloc space 0x%lx\n",
- max_distance, VMALLOC_TOTAL);
-#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
- /* and fail if we have fallback */
- rc = -EINVAL;
- goto out_free;
-#endif
}
pr_info("Embedded %zu pages/cpu @%p s%zu r%zu d%zu u%zu\n",
--
1.9.1