Message-Id: <20091016093252.30d78e4b.nishimura@mxp.nes.nec.co.jp>
Date: Fri, 16 Oct 2009 09:32:52 +0900
From: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
h-shimamoto@...jp.nec.com, linux-kernel@...r.kernel.org,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: [BUGFIX][PATCH -mmotm] memcg: don't do INIT_WORK() repeatedly
 against the same work_struct

This is a fix for memcg-coalesce-charging-via-percpu-storage.patch,
and can be applied after memcg-coalesce-charging-via-percpu-storage-fix.patch.

===
From: Daisuke Nishimura <nishimura@....nes.nec.co.jp>

Don't do INIT_WORK() repeatedly against the same work_struct; doing so
can actually lead to a BUG. Do it just once, at initialization time
(a minimal sketch of the resulting pattern follows the patch).

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Signed-off-by: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
---
mm/memcontrol.c | 13 ++++++++-----
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f850941..bf02bea 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1349,8 +1349,8 @@ static void drain_all_stock_async(void)
 	/* This function is for scheduling "drain" in asynchronous way.
 	 * The result of "drain" is not directly handled by callers. Then,
 	 * if someone is calling drain, we don't have to call drain more.
-	 * Anyway, work_pending() will catch if there is a race. We just do
-	 * loose check here.
+	 * Anyway, WORK_STRUCT_PENDING check in queue_work_on() will catch if
+	 * there is a race. We just do loose check here.
 	 */
 	if (atomic_read(&memcg_drain_count))
 		return;
@@ -1359,9 +1359,6 @@ static void drain_all_stock_async(void)
 	get_online_cpus();
 	for_each_online_cpu(cpu) {
 		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
-		if (work_pending(&stock->work))
-			continue;
-		INIT_WORK(&stock->work, drain_local_stock);
 		schedule_work_on(cpu, &stock->work);
 	}
 	put_online_cpus();
@@ -3327,11 +3324,17 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
 	/* root ? */
 	if (cont->parent == NULL) {
+		int cpu;
 		enable_swap_cgroup();
 		parent = NULL;
 		root_mem_cgroup = mem;
 		if (mem_cgroup_soft_limit_tree_init())
 			goto free_out;
+		for_each_possible_cpu(cpu) {
+			struct memcg_stock_pcp *stock =
+						&per_cpu(memcg_stock, cpu);
+			INIT_WORK(&stock->work, drain_local_stock);
+		}
 		hotcpu_notifier(memcg_stock_cpu_callback, 0);
 	} else {
--
1.5.6.1
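
For reference, here is a minimal, self-contained sketch of the pattern the
patch moves to: INIT_WORK() is called exactly once per work_struct at
initialization time, and later callers only queue the already-initialized
work. All names below (example_pcp, example_stock, example_drain_local,
example_init_works, example_drain_all_async) are hypothetical and used for
illustration only; the authoritative change is the diff above.

/*
 * Hypothetical sketch, not the actual mm/memcontrol.c code: per-cpu
 * work items are initialized once and only queued afterwards.
 */
#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

struct example_pcp {
	unsigned int cached;		/* stand-in for the cached charge */
	struct work_struct work;
};
static DEFINE_PER_CPU(struct example_pcp, example_stock);

/* Work handler: drains this CPU's cached amount (details omitted). */
static void example_drain_local(struct work_struct *work)
{
	struct example_pcp *stock =
		container_of(work, struct example_pcp, work);

	stock->cached = 0;
}

/* Called once, e.g. from subsystem init: the only place INIT_WORK() runs. */
static void example_init_works(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_WORK(&per_cpu(example_stock, cpu).work,
			  example_drain_local);
}

/* Safe to call at any time; never re-initializes the work_structs. */
static void example_drain_all_async(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		schedule_work_on(cpu, &per_cpu(example_stock, cpu).work);
	put_online_cpus();
}

Because the work_structs are set up only once, concurrent callers of the
async drain never race to re-run INIT_WORK() on a work item that may already
be queued; schedule_work_on()/queue_work_on() simply declines to queue a work
whose WORK_STRUCT_PENDING bit is still set, which is the "loose check" the
updated comment in the patch refers to.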