Date:   Tue, 10 Oct 2017 11:14:30 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Greg Thelen <gthelen@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>, linux-fsdevel@...r.kernel.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs, mm: account filp and names caches to kmemcg

On Mon 09-10-17 16:26:13, Johannes Weiner wrote:
> On Mon, Oct 09, 2017 at 10:52:44AM -0700, Greg Thelen wrote:
> > Michal Hocko <mhocko@...nel.org> wrote:
> > 
> > > On Fri 06-10-17 12:33:03, Shakeel Butt wrote:
> > >> >>       names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
> > >> >> -                     SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
> > >> >> +                     SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL);
> > >> >
> > >> > I might be wrong, but isn't the names cache only holding temporary
> > >> > objects used for path resolution, which are not stored anywhere?
> > >> >
> > >> 
> > >> Even though they're temporary, many containers can together use a
> > >> significant amount of transient uncharged memory. We've seen machines
> > >> with 100s of MiBs in names_cache.
> > >
> > > Yes, that might be possible, but are we prepared for random ENOMEM
> > > from vfs calls which need to allocate a temporary name?
> > >
> > >> 
> > >> >>       filp_cachep = kmem_cache_create("filp", sizeof(struct file), 0,
> > >> >> -                     SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
> > >> >> +                     SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT, NULL);
> > >> >>       percpu_counter_init(&nr_files, 0, GFP_KERNEL);
> > >> >>  }
> > >> >
> > >> > Don't we have a limit on the maximum number of open files?
> > >> >
> > >> 
> > >> Yes, there is a system-wide limit on the maximum number of open
> > >> files. However, this limit is shared between all users on the
> > >> system, and one user can hog the resource. To cater for that, we
> > >> set the maximum limit very high and let each user's memory limit
> > >> bound the number of files they can open.
> > >
> > > Similarly here. Are all syscalls allocating a fd prepared to return
> > > ENOMEM?
> > >
> > > -- 
> > > Michal Hocko
> > > SUSE Labs
> > 
> > Even before this patch I find memcg oom handling inconsistent.  Page
> > cache pages trigger the oom killer and may allow the caller to succeed
> > once the kernel retries.  But kmem allocations don't call the oom
> > killer.
> 
> It's consistent in the sense that only page faults enable the memcg
> OOM killer. It's not the type of memory that decides; it's whether the
> allocation context has a channel to communicate an error to userspace.
> 
> Whether userspace is able to handle -ENOMEM from syscalls was a concern
> voiced at the time this patch was merged, although there haven't been
> any reports so far,

Well, I remember reports about MAP_POPULATE breaking, or at least having
unexpected behavior.
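
For illustration, a minimal userspace sketch of the documented
MAP_POPULATE semantic (per mmap(2), the call does not fail merely
because the mapping could not be populated; the memcg interaction
described in the comments is my assumption, not a confirmed report):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* assumed to exceed the memcg hard limit */
		size_t len = 1UL << 30;
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
			       -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/*
		 * mmap() can return success even when population stopped
		 * early, e.g. on a failing memcg charge.  The pages are
		 * then faulted in here instead, where the memcg OOM
		 * killer is allowed to run.
		 */
		memset(p, 0, len);
		return 0;
	}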

> and it seemed like the lesser evil between that
> and deadlocking the kernel.

Agreed on this part, though.

> If we could find a way to invoke the OOM killer safely, I would
> welcome such patches.

Well, we should be able to do that with the oom_reaper, at least for
cgroup v2, which doesn't have synchronous userspace oom killing.
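
A very rough sketch of the idea (all names and return conventions here
are illustrative assumptions, not the actual mm/memcontrol.c
interfaces; assume the charge helper returns 0 on success and the OOM
call returns false when no killable task is left):

	static int try_charge_kmem(struct mem_cgroup *memcg, gfp_t gfp,
				   unsigned int nr_pages)
	{
		/*
		 * Because the oom_reaper tears down a victim's address
		 * space even if the victim never runs again, the charge
		 * path can wait for a kill to make progress instead of
		 * failing outright.
		 */
		while (try_charge(memcg, gfp, nr_pages)) {
			if (!mem_cgroup_oom(memcg, gfp,
					    get_order(nr_pages << PAGE_SHIFT)))
				return -ENOMEM;	/* nothing left to kill */
		}
		return 0;
	}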

[...]

> > c) Overcharge kmem to oom memcg and queue an async memcg limit checker,
> >    which will oom kill if needed.
> 
> This makes the most sense to me. Architecturally, I imagine this would
> look like b), with an OOM handler at the point of return to userspace,
> except that we'd overcharge instead of retrying the syscall.

I do not think we should break the hard limit semantics if we can avoid
it. We currently allow that for allocations which are very short-lived
(oom victims) or too important to fail, but allowing it for kmem charges
in general sounds like it would be too easy to run away with.
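
For reference, the existing carve-out I mean looks roughly like this in
try_charge() (paraphrased and simplified from mm/memcontrol.c, ~v4.14):

	/*
	 * Bypass the hard limit for tasks that are dying or already
	 * selected as OOM victims: the overcharge is short-lived, and
	 * failing these allocations would only make things worse.
	 */
	if (unlikely(tsk_is_oom_victim(current) ||
		     fatal_signal_pending(current) ||
		     current->flags & PF_EXITING))
		goto force;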

-- 
Michal Hocko
SUSE Labs
