Message-ID: <72e9a96ea399491948f396dab01b4c77.squirrel@webmail-b.css.fujitsu.com>
Date: Sun, 11 Oct 2009 11:37:35 +0900 (JST)
From: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>
To: "Andrew Morton" <akpm@...ux-foundation.org>
Cc: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
h-shimamoto@...jp.nec.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] memcg: coalescing charge by percpu (Oct/9)

Andrew Morton wrote:
> On Fri, 9 Oct 2009 17:01:05 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
>
>> +static void drain_all_stock_async(void)
>> +{
>> + int cpu;
>> + /* This function schedules "drain" asynchronously. The result of
>> + * the drain is not directly handled by callers, so if someone is
>> + * already draining, we don't need to schedule another drain.
>> + * work_pending() below catches any race; this is only a loose
>> + * check.
>> + */
>> + if (atomic_read(&memcg_drain_count))
>> + return;
>> + /* Notify other cpus that system-wide "drain" is running */
>> + atomic_inc(&memcg_drain_count);
>> + get_online_cpus();
>> + for_each_online_cpu(cpu) {
>> + struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
>> + if (work_pending(&stock->work))
>> + continue;
>> + INIT_WORK(&stock->work, drain_local_stock);
>> + schedule_work_on(cpu, &stock->work);
>> + }
>> + put_online_cpus();
>> + atomic_dec(&memcg_drain_count);
>> + /* We don't wait for flush_work */
>> +}
>
> It's unusual to run INIT_WORK() each time we use a work_struct.
> Usually we will run INIT_WORK a single time, then just repeatedly use
> that structure. Because after the work has completed, it is still in a
> ready-to-use state.
>
> Running INIT_WORK() repeatedly against the same work_struct adds a risk
> that we'll scribble on an in-use work_struct, which would make a big
> mess.
>
Ah, OK. I'll prepare a fix. (I also think the atomic_dec/inc placement is not
very good... I'll do a full review again.)
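
Roughly, what I have in mind is something like the following (just an
untested sketch; memcg_stock_init() is only an example name for an init
hook, it is not in the current patch):

static int __init memcg_stock_init(void)
{
	int cpu;

	/* Initialize each per-cpu work_struct exactly once, at boot. */
	for_each_possible_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		INIT_WORK(&stock->work, drain_local_stock);
	}
	return 0;
}

static void drain_all_stock_async(void)
{
	int cpu;

	if (atomic_read(&memcg_drain_count))
		return;
	/* Notify other cpus that system-wide "drain" is running */
	atomic_inc(&memcg_drain_count);
	get_online_cpus();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		/* The work was initialized at boot; never re-INIT_WORK it here. */
		if (work_pending(&stock->work))
			continue;
		schedule_work_on(cpu, &stock->work);
	}
	put_online_cpus();
	atomic_dec(&memcg_drain_count);
	/* We don't wait for flush_work */
}

That way the work_struct is never written while it may still be queued or
running. (The atomic_inc/dec part above is unchanged for now; I'll revisit
it in the full review.)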
Thank you for the review.
Regards,
-Kame