Date:	Mon, 3 Feb 2014 11:18:23 -0500
From:	Johannes Weiner <hannes@...xchg.org>
To:	Michal Hocko <mhocko@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [RFC 4/5] memcg: make sure that memcg is not offline when
 charging

On Mon, Feb 03, 2014 at 02:33:13PM +0100, Michal Hocko wrote:
> On Thu 30-01-14 12:29:06, Johannes Weiner wrote:
> > On Tue, Dec 17, 2013 at 04:45:29PM +0100, Michal Hocko wrote:
> > > The current charge path might race with memcg offlining because holding
> > > a css reference doesn't stop css offline. As a result the res counter
> > > might be charged after mem_cgroup_reparent_charges (called from the
> > > memcg css_offline callback), and such a charge would never be freed.
> > > This has been worked around by 96f1c58d8534 (mm: memcg: fix race
> > > condition between memcg teardown and swapin), which catches such
> > > leaked charges later, during css_free. In the long term, though, it
> > > would be better to close the race itself.
> > 
> > We already deal with the race, so IMO the only outstanding improvement
> > is to take advantage of the teardown synchronization provided by the
> > cgroup core and get rid of our one-liner workaround in .css_free.
> 
> I am not sure I am following you here. Which teardown synchronization do
> you have in mind? rcu_read_lock & css_tryget?

Yes.  It provides rcu synchronization between establishing new
references and offlining, as long as you establish references
atomically in one RCU read-side section:

repeat:
  rcu_read_lock()
  if not css_tryget():          # offlining already started, bail out
    rcu_read_unlock()
    bypass/fail
  charged = res_counter_charge()
  rcu_read_unlock()
  if not charged and retries++ < RECLAIM_RETRIES:
    reclaim                     # may sleep, outside the RCU section
    goto repeat
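
Spelled out a bit more as C -- a rough, untested sketch, not an actual
patch: it borrows the names the charge path already has (css_tryget,
css_put, res_counter_charge, mem_cgroup_reclaim,
MEM_CGROUP_RECLAIM_RETRIES) somewhat loosely, assumes the usual locals
(memcg, size, gfp_mask) and the bypass/done labels, and leaves out
memsw accounting, OOM handling and who ends up keeping the css
reference:

        int ret, retries = 0;
        struct res_counter *fail_res;

retry:
        rcu_read_lock();
        if (!css_tryget(&memcg->css)) {
                /* offlining has already begun, bypass/fail the charge */
                rcu_read_unlock();
                goto bypass;
        }
        /*
         * css_offline (and thus mem_cgroup_reparent_charges) only runs
         * after the ref kill has been confirmed, which includes an RCU
         * grace period, so a charge made in the same read-side section
         * as a successful tryget cannot be missed by the reparenting.
         */
        ret = res_counter_charge(&memcg->res, size, &fail_res);
        rcu_read_unlock();
        css_put(&memcg->css);   /* or keep the ref, depending on caller */

        if (!ret)
                goto done;

        if (retries++ < MEM_CGROUP_RECLAIM_RETRIES) {
                /* reclaim may sleep, so it runs outside the RCU section */
                mem_cgroup_reclaim(memcg, gfp_mask, 0);
                goto retry;
        }
        /* out of retries: OOM/bypass handling stays as it is today */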

> > > In order to make this raceless we would need to hold rcu_read_lock from
> > > css_tryget until res_counter_charge. Unfortunately this is not so easy,
> > > because mem_cgroup_do_charge might sleep, so we would need to drop the
> > > rcu lock and redo the css_tryget dance after each reclaim.
> > 
> > Yes, why not?
> 
> Although css_tryget is cheap these days, I thought that a simple flag
> check would be even cheaper in this hot path. Changing the patch to use
> css_tryget rather than the offline check is trivial, though, if you
> really think it is better.

You already changed it to do css_tryget() on every single charge.

Direct reclaim is invoked from only a fraction of all charges and is
already a slowpath; I don't think another percpu counter op will be the
final straw that makes this path too fat.
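
To put a picture on "another percpu counter op": since the css refcount
became a percpu_ref, a successful css_tryget() in the common case is
just a this-CPU counter increment done under RCU. A purely illustrative
model follows -- the struct and field names are made up, not the real
percpu-refcount code:

/*
 * Illustrative model only: in the common (not-offlining) case a tryget
 * is a single per-CPU increment, hence "a percpu counter op".  The
 * real implementation lives in lib/percpu-refcount.c.
 */
struct model_ref {
        bool dying;                             /* set when offlining starts */
        unsigned int __percpu *percpu_count;    /* per-CPU reference counts */
};

static bool model_css_tryget(struct model_ref *ref)
{
        bool got = false;

        rcu_read_lock();
        if (likely(!ref->dying)) {
                this_cpu_inc(*ref->percpu_count);       /* fast path */
                got = true;
        }
        rcu_read_unlock();

        return got;     /* false once offlining has begun */
}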
