Message-Id: <20091009165002.629a91d2.akpm@linux-foundation.org>
Date: Fri, 9 Oct 2009 16:50:02 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
h-shimamoto@...jp.nec.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] memcg: coalescing charge by percpu (Oct/9)
On Fri, 9 Oct 2009 17:01:05 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> +static void drain_all_stock_async(void)
> +{
> +	int cpu;
> +	/* This function schedules "drain" asynchronously; the result of
> +	 * the drain is not handled directly by the callers. So if someone
> +	 * is already draining, we don't have to schedule another drain.
> +	 * work_pending() will catch any race; we only do a loose check
> +	 * here.
> +	 */
> +	if (atomic_read(&memcg_drain_count))
> +		return;
> +	/* Notify other cpus that a system-wide "drain" is running */
> +	atomic_inc(&memcg_drain_count);
> +	get_online_cpus();
> +	for_each_online_cpu(cpu) {
> +		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> +		if (work_pending(&stock->work))
> +			continue;
> +		INIT_WORK(&stock->work, drain_local_stock);
> +		schedule_work_on(cpu, &stock->work);
> +	}
> +	put_online_cpus();
> +	atomic_dec(&memcg_drain_count);
> +	/* We don't wait for flush_work() */
> +}

It's unusual to run INIT_WORK() each time we use a work_struct.
Usually we run INIT_WORK() a single time and then just reuse the
structure repeatedly, because after the work has completed it is still
in a ready-to-use state.

Running INIT_WORK() repeatedly against the same work_struct adds the
risk that we'll scribble on an in-use work_struct, which would make a
big mess.
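
For illustration, here is a minimal sketch of that pattern against the
quoted patch.  The memcg_stock_init() hook is hypothetical (where the
one-time initialization really belongs is up to the memcg setup path);
the other identifiers come from the patch itself.

	/* Per-cpu stock, as introduced by the patch; only the work
	 * member matters for this sketch. */
	struct memcg_stock_pcp {
		struct work_struct work;
		/* ... other per-cpu stock fields from the patch ... */
	};
	static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
	static atomic_t memcg_drain_count;

	static void drain_local_stock(struct work_struct *dummy);	/* from the patch */

	/* Hypothetical: run once from the memcg init path, so every
	 * work_struct is initialized exactly once. */
	static void memcg_stock_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			INIT_WORK(&per_cpu(memcg_stock, cpu).work,
				  drain_local_stock);
	}

	/* Later callers only schedule the already-initialized work. */
	static void drain_all_stock_async(void)
	{
		int cpu;

		if (atomic_read(&memcg_drain_count))
			return;
		atomic_inc(&memcg_drain_count);
		get_online_cpus();
		for_each_online_cpu(cpu) {
			struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

			if (work_pending(&stock->work))
				continue;
			schedule_work_on(cpu, &stock->work);
		}
		put_online_cpus();
		atomic_dec(&memcg_drain_count);
		/* We don't wait for flush_work() */
	}

With the work items set up once, the async path only tests
work_pending() and calls schedule_work_on(), and there is no window in
which a live work_struct can be re-initialized.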