Date:   Tue, 24 Oct 2017 23:51:30 -0700
From:   Greg Thelen <>
To:     Michal Hocko <>,
        Johannes Weiner <>
Cc:     Shakeel Butt <>,
        Alexander Viro <>,
        Vladimir Davydov <>,
        Andrew Morton <>,
        Linux MM <>,
        LKML <>
Subject: Re: [PATCH] fs, mm: account filp and names caches to kmemcg

Michal Hocko <> wrote:

> On Tue 24-10-17 14:58:54, Johannes Weiner wrote:
>> On Tue, Oct 24, 2017 at 07:55:58PM +0200, Michal Hocko wrote:
>> > On Tue 24-10-17 13:23:30, Johannes Weiner wrote:
>> > > On Tue, Oct 24, 2017 at 06:22:13PM +0200, Michal Hocko wrote:
>> > [...]
>> > > > What would prevent a runaway in case the only process in the memcg is
>> > > > oom unkillable then?
>> > > 
>> > > In such a scenario, the page fault handler would busy-loop right now.
>> > > 
>> > > Disabling oom kills is a privileged operation with dire consequences
>> > > if used incorrectly. You can panic the kernel with it. Why should the
>> > > cgroup OOM killer implement protective semantics around this setting?
>> > > Breaching the limit in such a setup is entirely acceptable.
>> > > 
>> > > Really, I think it's an enormous mistake to start modeling semantics
>> > > based on the most contrived and non-sensical edge case configurations.
>> > > Start the discussion with what is sane and what most users should
>> > > optimally experience, and keep the cornercases simple.
>> > 
>> > I am not really seeing your concern about the semantics. The most
>> > important property of the hard limit is to protect from runaways and
>> > stop them if they happen. Users can use the softer variant (the high
>> > limit) if they are not afraid of those scenarios. It is not so insane
>> > to imagine that a master task (which I can easily imagine would be
>> > oom disabled) has a leak and runs away as a result.
>> 
>> Then you're screwed either way. Where do you return -ENOMEM in a page
>> fault path that cannot OOM kill anything? Your choice is between
>> maintaining the hard limit semantics or going into an infinite loop.
> 
> in the PF path, yes. And I would argue that this is a reasonable
> compromise to provide the guarantee the hard limit is giving us (and
> the resulting isolation, which is the whole point). Btw. we already
> have that behavior. All we are talking about is the non-PF path, which
> ENOMEMs right now and which the meta-patch tried to handle more
> gracefully, ENOMEMing only when there is no other option.
> 
>> I fail to see how this setup has any impact on the semantics we pick
>> here. And even if it were real, it's really not what most users do.
> 
> sure, such a scenario is really an edge case, but my main point was
> that the hard limit is an enforcement of an isolation guarantee (as
> much as possible, of course).
> 
>> > We are not talking only about the page fault path. There are other
>> > allocation paths that can consume a lot of memory, spill over, and
>> > break the isolation restriction. So it makes much more sense to me
>> > to fail the allocation in such a situation rather than allow the
>> > runaway to continue. Just consider that such a situation shouldn't
>> > happen in the first place, because there should always be an
>> > eligible task to kill - who would own all the memory otherwise?
>> 
>> Okay, then let's just stick to the current behavior.
> 
> I am definitely not pushing that thing right now. It is good to discuss
> it, though. The more kernel allocations we track, the more careful we
> will have to be, so we may have to reconsider the current approach. I
> am not sure we need to _right now_, but I feel we eventually will.

The kernel already attempts to charge radix_tree_nodes.  If the charge
fails, then we fall back to an unaccounted allocation.  So the memcg
limit already isn't an airtight constraint.

I agree that unchecked overcharging could be bad, but wonder if we could
overcharge kmem so long as there is a pending oom kill victim.  If
current is the victim, or there is no victim, then fail allocations (as
is currently done).  The current thread can loop in syscall exit until
usage is reconciled (either via reclaim or a kill).  This seems
consistent with pagefault oom handling and compatible with the
overcommit use case.

Here's an example of an overcommit case we've found quite useful.  Memcg
A has memory which is shared between children B and C.  B is more
important than C.  B and C are unprivileged; neither has the authority
to kill the other.

    /A(limit=100MB) - B(limit=80MB,prio=high)
                     \ C(limit=80MB,prio=low)

If a memcg charge drives B.usage+C.usage>=A.limit, then C should be
killed due to its low priority.  A pagefault in B can kill, but if a
syscall returns ENOMEM then B can't do anything useful with it.

I know there are related oom killer victim selection discussions afoot.
Even with classic oom_score_adj killing it's possible to heavily bias
the oom killer to select C over B.
