Message-ID: <20180911154735.GC28828@tower.DHCP.thefacebook.com>
Date:   Tue, 11 Sep 2018 08:47:35 -0700
From:   Roman Gushchin <guro@...com>
To:     Johannes Weiner <hannes@...xchg.org>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        <kernel-team@...com>, Michal Hocko <mhocko@...nel.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>
Subject: Re: [PATCH RFC] mm: don't raise MEMCG_OOM event due to failed
 high-order allocation

On Tue, Sep 11, 2018 at 08:43:03AM -0400, Johannes Weiner wrote:
> On Mon, Sep 10, 2018 at 02:56:22PM -0700, Roman Gushchin wrote:
> > The memcg OOM killer is never invoked due to a failed high-order
> > allocation; however, the MEMCG_OOM event can easily be raised.
> 
> Wasn't the same also true for kernel allocations until recently? We'd
> signal MEMCG_OOM and then return -ENOMEM.

Well, assuming that it's normal for a cgroup to have its memory usage
right around the memory.max limit, that sounds strange.

> 
> > Under some memory pressure it can happen easily because of a
> > concurrent allocation. Let's look at try_charge(). Even if we were
> > able to reclaim enough memory, this check can fail due to a race
> > with another allocation:
> > 
> >     if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
> >         goto retry;
> > 
> > For regular pages the following condition will save us from triggering
> > the OOM:
> > 
> >    if (nr_reclaimed && nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER))
> >        goto retry;
> > 
> > But for a high-order allocation this condition will intentionally fail.
> > The reason behind this is that we'll likely fall back to regular pages
> > anyway, so it's ok and even preferable to return -ENOMEM.
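
For context, here is a simplified sketch of how these two checks sit
together in try_charge(). Names match mm/memcontrol.c, but details are
elided and this is not the exact upstream code:

    retry:
        ...
        nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit,
                            nr_pages, gfp_mask, may_swap);
        ...
        /*
         * Memory may have been freed or uncharged while we were
         * reclaiming; if enough margin appeared, retry the charge.
         * A concurrent allocation can race with this check and
         * consume the margin before we actually retry.
         */
        if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
            goto retry;
        ...
        /*
         * For regular pages (order <= PAGE_ALLOC_COSTLY_ORDER),
         * any reclaim progress justifies another attempt.  Costly
         * high-order requests fall through and eventually return
         * -ENOMEM, since the caller can usually fall back to
         * order-0 pages.
         */
        if (nr_reclaimed && nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER))
            goto retry;
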
> 
> These seem to be more implementation details than anything else.
> 
> Personally, I'm confused by the difference between the "oom" and
> "oom_kill" events, and I don't understand when you would be interested
> in one and when in the other. The difference again seems to be mostly
> implementation details.
> 
> But the definition of "oom"/MEMCG_OOM in cgroup-v2.rst applies to the
> situation of failing higher-order allocations. I'm not per se against
> changing the semantics here, as I don't think they are great. But can
> you please start out with rewriting the definition in a way that shows
> the practical difference for users?
> 
> The original idea behind MEMCG_OOM was to signal when reclaim had
> failed and we defer to the oom killer. The oom killer may or may not
> kill anything, which is the case for higher order allocations, but
> that doesn't change the out-of-memory situation that has occurred.
> 
> Konstantin added the OOM_KILL events to count actual kills. It seems
> to me that this has far more practical applications than the more
> theoretical OOM, since users care about kills and not necessarily
> about "reclaim failed (but I might have been able to handle it with
> retries and fallback allocations, so there isn't an actual issue)".
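
On cgroup v2 both counters appear side by side in memory.events, so
the difference is directly visible to users. A made-up example reading
for some cgroup "foo":

    $ cat /sys/fs/cgroup/foo/memory.events
    low 0
    high 0
    max 12
    oom 3
    oom_kill 1

Here oom was raised three times, but only one task was actually killed.
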
> 
> Is there a good reason for keeping OOM now that we have OOM_KILL?

I totally agree that oom_kill is more useful, and I did propose
converting the existing oom counter to oom_kill semantics back when
Konstantin's patch was discussed. So I'm not arguing that having two
counters is really useful; I've held the opposite opinion from the start.

However, I'm not sure whether it's already too late to remove the oom
event. If it is, let's at least make it less confusing.

The definition of the oom event in the docs is quite broad, so both
the current behavior and the proposed change fit it. In that sense it's
not a semantics change at all, just an implementation detail.

Let's agree that the oom event should not indicate a "random"
allocation failure, but one caused by high memory pressure. Otherwise
it's really an alloc_failure counter, which would belong in memory.stat.
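
Concretely, that could look something like this in mem_cgroup_oom()
(a sketch only; the real function's signature and surrounding logic
are simplified here):

    static bool mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask,
                               int order)
    {
        /*
         * Costly high-order allocations never invoke the memcg OOM
         * killer (the caller is expected to fall back to order-0
         * pages), so don't count them as MEMCG_OOM events either.
         */
        if (order > PAGE_ALLOC_COSTLY_ORDER)
            return false;

        memcg_memory_event(memcg, MEMCG_OOM);
        ...
    }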

Thanks!
