Date:   Wed, 25 Oct 2017 09:15:22 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Greg Thelen <gthelen@...gle.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>, linux-fsdevel@...r.kernel.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs, mm: account filp and names caches to kmemcg

On Tue 24-10-17 23:51:30, Greg Thelen wrote:
> Michal Hocko <mhocko@...nel.org> wrote:
[...]
> > I am definitely not pushing that thing right now. It is good to discuss
> > it, though. The more kernel allocations we track, the more careful we
> > will have to be. So maybe we will have to reconsider the current
> > approach. I am not sure we need it _right now_, but I feel we will
> > eventually have to reconsider it.
> 
> The kernel already attempts to charge radix_tree_nodes.  If the charge
> fails, then we fall back to unaccounted memory.

I am not sure which code path you have in mind. All I can see is that we
drop __GFP_ACCOUNT when preloading radix tree nodes. Anyway...

> So the memcg limit already
> isn't an airtight constraint.

... we shouldn't make it any looser, though.
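
To illustrate that preload special case, a simplified sketch (not the
verbatim lib/radix-tree.c code; the function name here is made up):
preloaded nodes may later serve an insertion from any context, so the
preload path strips __GFP_ACCOUNT and those spare nodes are never
charged to a memcg.

static struct radix_tree_node *node_alloc_sketch(gfp_t gfp, bool preloading)
{
        /*
         * Spare nodes from the preload pool may be consumed by any task,
         * so charging them to the preloading task's memcg would be wrong;
         * drop the accounting flag for those.
         */
        if (preloading)
                gfp &= ~__GFP_ACCOUNT;

        return kmem_cache_alloc(radix_tree_node_cachep, gfp);
}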

> I agree that unchecked overcharging could be bad, but wonder if we could
> overcharge kmem so long as there is a pending oom kill victim.

Why is this any better than simply trying to charge as long as the oom
killer makes progress?

> If
> current is the victim, or there is no victim, then fail allocations (as is
> currently done).

We actually force the charge in that case, so we will proceed.
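
Roughly like this (a hedged sketch loosely modelled on the memcg charge
path, not the exact mm/memcontrol.c code; the helper name is made up):

static int try_charge_sketch(struct mem_cgroup *memcg, unsigned long nr_pages)
{
        struct page_counter *fail;

        if (page_counter_try_charge(&memcg->memory, nr_pages, &fail))
                return 0;       /* fits under the limit */

        /*
         * A task that is already an OOM victim (or is dying) is allowed
         * to overrun the limit so it can finish exiting and release its
         * memory, instead of getting ENOMEM here.
         */
        if (tsk_is_oom_victim(current) || fatal_signal_pending(current)) {
                page_counter_charge(&memcg->memory, nr_pages);
                return 0;
        }

        return -ENOMEM;
}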

> The current thread can loop in the syscall exit path until
> usage is reconciled (either via reclaim or a kill).  This seems consistent
> with pagefault oom handling and compatible with the overcommit use case.

But we do not really want to make the syscall exit path any more complex
or more expensive than it already is. The point is that we shouldn't be
afraid of triggering the oom killer from the charge path, because we do
have an async OOM killer. This is the very same situation as the standard
allocator path, so why should memcg be any different?
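
In other words (an illustrative sketch only, reusing try_charge_sketch()
from above; memcg_oom_make_progress_sketch() is hypothetical): keep
retrying the charge and give up only once the OOM killer can no longer
make progress, much like the page allocator does.

static int charge_with_retry_sketch(struct mem_cgroup *memcg,
                                    unsigned long nr_pages)
{
        for (;;) {
                if (!try_charge_sketch(memcg, nr_pages))
                        return 0;       /* charged successfully */

                /*
                 * Reclaim and, if needed, invoke the (async) OOM killer.
                 * Only bail out once it can no longer free anything up.
                 */
                if (!memcg_oom_make_progress_sketch(memcg))
                        return -ENOMEM;
        }
}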

> Here's an example of an overcommit case we've found quite useful.  Memcg A has
> memory which is shared between children B and C.  B is more important than C.
> B and C are unprivileged; neither has the authority to kill the other.
> 
>     /A(limit=100MB) - B(limit=80MB,prio=high)
>                      \ C(limit=80MB,prio=low)
> 
> If a memcg charge drives B.usage+C.usage >= A.limit, then C should be killed
> due to its low priority.  A pagefault in B can trigger a kill, but if a
> syscall returns ENOMEM then B can't do anything useful with it.

Well, my proposal was not to return ENOMEM, but rather to loop in the charge
path and wait for the oom killer to free up some charges. Who gets
killed is really out of scope of this discussion.
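
As an aside, the A/B/C hierarchy from the example above could be set up
roughly like this (a hedged userspace sketch assuming cgroup v2 mounted
at /sys/fs/cgroup; the priority attribute has no upstream knob, so it is
left out):

/*
 * Create the A/B/C hierarchy from the example and apply the memory
 * limits.  Error handling is kept minimal for brevity; note that
 * B.limit + C.limit deliberately exceeds A.limit (overcommit).
 */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static void set_limit(const char *cg, const char *bytes)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/memory.max", cg);
        f = fopen(path, "w");
        if (f) {
                fputs(bytes, f);
                fclose(f);
        }
}

int main(void)
{
        mkdir("/sys/fs/cgroup/A", 0755);
        mkdir("/sys/fs/cgroup/A/B", 0755);
        mkdir("/sys/fs/cgroup/A/C", 0755);

        set_limit("A", "104857600");    /* 100MB */
        set_limit("A/B", "83886080");   /*  80MB */
        set_limit("A/C", "83886080");   /*  80MB */
        return 0;
}
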
-- 
Michal Hocko
SUSE Labs
