Message-ID: <20130211175619.GC13218@cmpxchg.org>
Date: Mon, 11 Feb 2013 12:56:19 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Michal Hocko <mhocko@...e.cz>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Ying Han <yinghan@...gle.com>, Tejun Heo <htejun@...il.com>,
Glauber Costa <glommer@...allels.com>,
Li Zefan <lizefan@...wei.com>
Subject: Re: [PATCH v3 4/7] memcg: remove memcg from the reclaim iterators
On Mon, Feb 11, 2013 at 04:16:49PM +0100, Michal Hocko wrote:
> On Fri 08-02-13 14:33:18, Johannes Weiner wrote:
> [...]
> >   for each in hierarchy:
> >     for each node:
> >       for each zone:
> >         for each reclaim priority:
> >
> > every time a cgroup is destroyed. I don't think such a hammer is
> > justified in general, let alone for consolidating code a little.
> >
> > Can we invalidate the position cache lazily? Have a global "cgroup
> > destruction" counter and store a snapshot of that counter whenever we
> > put a cgroup pointer in the position cache. We only use the cached
> > pointer if that counter has not changed in the meantime, so we know
> > that the cgroup still exists.
>
> Currently we have:
>   rcu_read_lock()               // keeps cgroup links safe
>   iter->iter_lock               // keeps selection exclusive for a specific iterator
>   1) global_counter == iter_counter
>   2) css_tryget(cached_memcg)   // check it is still alive
>   rcu_read_unlock()
>
> What would protect us from races when css would disappear between 1 and
> 2?
RCU does.
> css is invalidated from a worker context scheduled from __css_put, and
> it uses dentry locking, which we surely do not want to pull in here. We
> could hook into css_offline, which is called under cgroup_mutex, but we
> cannot take that lock here because it is no longer exported and Tejun
> would kill us for that.
> So we could add a new global memcg-internal lock to do this atomically.
> Ohh, this is getting uglier...
A racing final css_put() means that the tryget fails, but our RCU read
lock keeps the css allocated.  If the dead_count is up to date, it means
that the RCU read lock was acquired before the synchronize_rcu() that
precedes freeing the css.
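To make that ordering concrete, here is a rough sketch of the two sides
involved (illustrative only -- the global counter, the dead_count and
position fields, and the helper names are made up for this example, not
actual kernel symbols):

/* Destruction side, run once per cgroup that goes away: */
static void cgroup_goes_away(struct mem_cgroup *memcg)
{
	atomic_inc(&cgroup_dead_count);	/* invalidate cached positions */
	synchronize_rcu();		/* wait out current RCU readers */
	free_memcg_css(memcg);		/* only now may the css be freed */
}

/* Reclaim side, trying to reuse the cached position: */
static struct mem_cgroup *load_cached_position(struct mem_cgroup_reclaim_iter *iter)
{
	struct mem_cgroup *pos = NULL;

	rcu_read_lock();
	if (atomic_read(&cgroup_dead_count) == iter->dead_count) {
		/*
		 * The counter matches, so this RCU read-side section
		 * began before the synchronize_rcu() that precedes
		 * freeing the css; the css is still allocated here.
		 * A racing final css_put() only makes the tryget fail.
		 */
		pos = iter->position;
		if (pos && !css_tryget(&pos->css))
			pos = NULL;
	}
	rcu_read_unlock();
	return pos;
}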
> > It is pretty imprecise and we invalidate the whole cache every time
> > a cgroup is destroyed, but I think that should be okay.
>
> I am not sure this is OK, because it gives one hierarchy an indirect
> way of influencing reclaim in another, which opens the door to
> regressions (or malicious over-reclaim in the extreme case).
> So I do not like this very much.
>
> > If not, better ideas are welcome.
>
> Maybe we could keep the counter per memcg but that would mean that we
> would need to go up the hierarchy as well. We wouldn't have to go over
> node-zone-priority cleanup so it would be much more lightweight.
>
> I am not sure this is necessarily better than explicit cleanup, because
> it brings yet another kind of generation number into the game, but I
> guess I can live with it if people really think the relaxed way is much
> better.
> What do you think about the patch below (still untested)?
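(Michal's patch itself is not quoted here; purely as an illustration of
the per-memcg counter idea he describes above -- not his actual patch,
and the dead_count field is an assumption -- the offline path might walk
up the hierarchy roughly like this:)

static void memcg_offline_invalidate_iterators(struct mem_cgroup *memcg)
{
	struct mem_cgroup *pos = memcg;

	/*
	 * Bump the counter on the dying memcg and on every ancestor, so
	 * that only the hierarchies which could actually have cached
	 * this memcg as an iterator position are invalidated; unrelated
	 * hierarchies keep their caches.
	 */
	do {
		atomic_inc(&pos->dead_count);
	} while ((pos = parent_mem_cgroup(pos)));
}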
Better, but I think you can get rid of both locks:

mem_cgroup_iter:
  rcu_read_lock()
  if atomic_read(&root->dead_count) == iter->dead_count:
    smp_rmb()
    if tryget(iter->position):
      position = iter->position
  memcg = find_next(position)
  css_put(position)
  iter->position = memcg
  smp_wmb() /* Write position cache BEFORE marking it up to date */
  iter->dead_count = atomic_read(&root->dead_count)
  rcu_read_unlock()
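Spelled out as a rough C sketch (again illustrative only -- find_next()
stands in for the hierarchy walk, and the position/dead_count fields are
assumptions), the barrier pairing would look like this: the writer
publishes the new position before the dead_count snapshot, and the
reader checks the snapshot before trusting the cached position:

static struct mem_cgroup *iter_next(struct mem_cgroup *root,
				    struct mem_cgroup_reclaim_iter *iter)
{
	struct mem_cgroup *position = NULL;
	struct mem_cgroup *memcg;

	rcu_read_lock();
	if (atomic_read(&root->dead_count) == iter->dead_count) {
		/*
		 * Pairs with the smp_wmb() below: if the dead_count
		 * matches, make sure we also see the position store
		 * that preceded it.
		 */
		smp_rmb();
		position = iter->position;
		if (position && !css_tryget(&position->css))
			position = NULL;
	}

	memcg = find_next(position);	/* hierarchy walk, elided here */
	if (position)
		css_put(&position->css);

	iter->position = memcg;
	/* Write the position cache BEFORE marking it up to date. */
	smp_wmb();
	iter->dead_count = atomic_read(&root->dead_count);
	rcu_read_unlock();

	return memcg;
}

The destruction side would pair with this by atomically incrementing
root->dead_count before the RCU grace period that precedes freeing the
css, as in the earlier sketch.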