Message-Id: <20091013165719.c5781bfa.nishimura@mxp.nes.nec.co.jp>
Date:	Tue, 13 Oct 2009 16:57:19 +0900
From:	Daisuke Nishimura <nishimura@....nes.nec.co.jp>
To:	"KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>
Cc:	"Andrew Morton" <akpm@...ux-foundation.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
	h-shimamoto@...jp.nec.com, linux-kernel@...r.kernel.org,
	Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH 2/2] memcg: coalescing charge by percpu (Oct/9)

On Sun, 11 Oct 2009 11:37:35 +0900 (JST), "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com> wrote:
> Andrew Morton wrote:
> > On Fri, 9 Oct 2009 17:01:05 +0900
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> >
> >> +static void drain_all_stock_async(void)
> >> +{
> >> +	int cpu;
> >> +	/*
> >> +	 * This function schedules "drain" asynchronously. The result of
> >> +	 * "drain" is not handled directly by the callers, so if someone
> >> +	 * is already draining we don't have to schedule another drain.
> >> +	 * Anyway, work_pending() will catch any race; we only do a loose
> >> +	 * check here.
> >> +	 */
> >> +	if (atomic_read(&memcg_drain_count))
> >> +		return;
> >> +	/* Notify other cpus that system-wide "drain" is running */
> >> +	atomic_inc(&memcg_drain_count);
Shouldn't we use atomic_inc_not_zero() here?
(Is this the problem you mean by "is not very good" below?)
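
For illustration only (and not what the patch does): one way to close the
window between the atomic_read() and the atomic_inc() is to make the
"am I the first drainer?" check a single atomic step, e.g. with
atomic_cmpxchg(); the surrounding names are just taken from the patch:

	/*
	 * Illustration only: atomically move memcg_drain_count 0 -> 1.
	 * Only the caller that wins the cmpxchg goes on to schedule the
	 * per-cpu works; everyone else returns immediately.
	 */
	if (atomic_cmpxchg(&memcg_drain_count, 0, 1) != 0)
		return;
	/* ... schedule the per-cpu works as in the patch ... */
	atomic_dec(&memcg_drain_count);	/* 1 -> 0: allow the next drain */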


Thanks,
Daisuke Nishimura.

> >> +	get_online_cpus();
> >> +	for_each_online_cpu(cpu) {
> >> +		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> >> +		if (work_pending(&stock->work))
> >> +			continue;
> >> +		INIT_WORK(&stock->work, drain_local_stock);
> >> +		schedule_work_on(cpu, &stock->work);
> >> +	}
> >> +	put_online_cpus();
> >> +	atomic_dec(&memcg_drain_count);
> >> +	/* We don't wait for flush_work */
> >> +}
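(For reference only, not part of this patch: a synchronous variant that
does wait could look roughly like the sketch below, flushing each
scheduled work before returning. The name drain_all_stock_sync is made
up for the example, and it assumes the per-cpu work_structs are already
initialized.)

static void drain_all_stock_sync(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		schedule_work_on(cpu, &per_cpu(memcg_stock, cpu).work);
	/* wait for every scheduled drain to finish */
	for_each_online_cpu(cpu)
		flush_work(&per_cpu(memcg_stock, cpu).work);
	put_online_cpus();
}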
> >
> > It's unusual to run INIT_WORK() each time we use a work_struct.
> > Usually we run INIT_WORK() a single time and then just reuse that
> > structure repeatedly, because after the work has completed it is
> > still in a ready-to-use state.
> >
> > Running INIT_WORK() repeatedly against the same work_struct adds a risk
> > that we'll scribble on an in-use work_struct, which would make a big
> > mess.
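(For illustration, a minimal sketch of the one-time initialization
described above, assuming it is done once from memcg's init path; the
function name memcg_stock_init is made up for the example:)

static void __init memcg_stock_init(void)
{
	int cpu;

	/* Initialize each per-cpu work_struct exactly once ... */
	for_each_possible_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		INIT_WORK(&stock->work, drain_local_stock);
	}
}

/*
 * ... so drain_all_stock_async() only needs schedule_work_on(cpu,
 * &stock->work) and never re-runs INIT_WORK() on a possibly in-flight
 * work_struct.
 */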
> >
> Ah, ok. I'll prepare a fix. (And I think the atomic_dec/inc placement is
> not very good... I'll do a full review again.)
> 
> Thank you for review.
> 
> Regards,
> -Kame
> 
> 