Date:   Tue, 10 Oct 2017 10:17:33 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Greg Thelen <gthelen@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>, linux-fsdevel@...r.kernel.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs, mm: account filp and names caches to kmemcg

On Tue, Oct 10, 2017 at 11:14:30AM +0200, Michal Hocko wrote:
> On Mon 09-10-17 16:26:13, Johannes Weiner wrote:
> > It's consistent in the sense that only page faults enable the memcg
> > OOM killer. It's not the type of memory that decides, it's whether the
> > allocation context has a channel to communicate an error to userspace.
> > 
> > Whether userspace is able to handle -ENOMEM from syscalls was a voiced
> > concern at the time this patch was merged, although there haven't been
> > any reports so far,
> 
> Well, I remember reports about MAP_POPULATE breaking or at least having
> unexpected behavior.

Hm, that slipped past me. Did we do something about these? Or did they
fix userspace?

> Well, we should be able to do that with the oom_reaper. At least for v2
> which doesn't have synchronous userspace oom killing.

I don't see how the OOM reaper is a guarantee as long as we have this:

	if (!down_read_trylock(&mm->mmap_sem)) {
		ret = false;
		trace_skip_task_reaping(tsk->pid);
		goto unlock_oom;
	}

What do you mean by 'v2'?

> > > c) Overcharge kmem to oom memcg and queue an async memcg limit checker,
> > >    which will oom kill if needed.
> > 
> > This makes the most sense to me. Architecturally, I imagine this would
> > look like b), with an OOM handler at the point of return to userspace,
> > except that we'd overcharge instead of retrying the syscall.
> 
> I do not think we should break the hard limit semantic if possible. We
> can currently allow that for allocations which are very short term (oom
> victims) or too important to fail, but allowing that for kmem charges
> in general sounds too easy to run away with.

I'm not sure there is a convenient way out of this.

If we want to respect the hard limit AND guarantee allocation success,
the OOM killer has to free memory reliably - which it doesn't. But if
it did, we could also break the limit temporarily and have the OOM
killer replenish the pool before the userspace app can continue. The
allocation wouldn't have to be short-lived, since memory is fungible.

Until the OOM killer is 100% reliable, we have the choice between
sometimes deadlocking the cgroup tasks and everything that interacts
with them, returning -ENOMEM for syscalls, or breaking the hard limit
guarantee during memcg OOM.

It seems breaking the limit temporarily in order to reclaim memory is
the best option. There is kernel memory we don't account to the memcg
already because we think it's probably not going to be significant, so
the isolation isn't 100% watertight in the first place. And I'd rather
have the worst-case effect of a cgroup OOMing be spilling over its
hard limit than deadlocking things inside and outside the cgroup.
