Message-ID: <20171013065150.dzesflih5ot2z3px@dhcp22.suse.cz>
Date: Fri, 13 Oct 2017 08:51:50 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Greg Thelen <gthelen@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Shakeel Butt <shakeelb@...gle.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>, linux-fsdevel@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs, mm: account filp and names caches to kmemcg
On Thu 12-10-17 16:57:22, Greg Thelen wrote:
[...]
> Overcharging kmem with deferred reconciliation sounds good to me.
>
> A few comments (not reasons to avoid this):
>
> 1) If a task is moved between memcgs it seems possible to overcharge
> multiple oom memcgs for different kmem/user allocations.
> mem_cgroup_oom_synchronize() would see at most one oom memcg in
> current->memcg_in_oom. Thus it'd only reconcile a single memcg. But
> that seems pretty rare and the next charge to any of the other memcg
> would reconcile them.
This is a general problem for the cgroup v2 memcg oom handling.
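For reference, the single-slot bookkeeping behind 1) looks roughly like
this (condensed from the current mm/memcontrol.c, locking and error
handling elided):

	static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
	{
		/* a task remembers at most one oom memcg at a time */
		css_get(&memcg->css);
		current->memcg_in_oom = memcg;
		current->memcg_oom_gfp_mask = mask;
		current->memcg_oom_order = order;
	}

	bool mem_cgroup_oom_synchronize(bool handle)
	{
		struct mem_cgroup *memcg = current->memcg_in_oom;

		if (!memcg)
			return false;
		/*
		 * only this one memcg gets reconciled here; charges made
		 * against other memcgs after a migration are not seen
		 */
		[...]
		current->memcg_in_oom = NULL;
		css_put(&memcg->css);
		return true;
	}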
> 2) if a kernel thread charges kmem on behalf of a client mm then there
> is no good place to call mem_cgroup_oom_synchronize(), short of
> launching a work item in mem_cgroup_oom(). I don't think we have anything
> like that yet. So nothing to worry about.
If we do invoke the OOM killer from the charge path, it shouldn't be a
problem.
> 3) it's debatable if mem_cgroup_oom_synchronize() should first attempt
> reclaim before killing. But that's a whole 'nother thread.
Again, this shouldn't be an issue if we invoke the oom killer from the
charge path.
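Roughly, invoking the oom killer from the charge path would mean doing
the kill synchronously in try_charge() once reclaim has failed, rather
than remembering the memcg for mem_cgroup_oom_synchronize(). A sketch
(variable names as in the existing try_charge(), the flow itself is only
illustrative):

	/* reclaim retries exhausted: kill right here and retry the charge */
	mem_cgroup_out_of_memory(mem_over_limit, gfp_mask,
				 get_order(nr_pages * PAGE_SIZE));
	nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
	goto retry;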
> 4) overcharging with deferred reconciliation could also be used for user
> pages. But I haven't looked at the code long enough to know if this
> would be a net win.
It would solve g-u-p unexpectedly failing with ENOMEM just because of a
memcg charge failure.
--
Michal Hocko
SUSE Labs