Open Source and information security mailing list archives
 
Date:	Fri, 11 Mar 2016 16:45:34 +0300
From:	Vladimir Davydov <vdavydov@...tuozzo.com>
To:	Michal Hocko <mhocko@...nel.org>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: memcontrol: zap
 task_struct->memcg_oom_{gfp_mask,order}

On Fri, Mar 11, 2016 at 01:51:05PM +0100, Michal Hocko wrote:
> On Fri 11-03-16 15:39:00, Vladimir Davydov wrote:
> > On Fri, Mar 11, 2016 at 12:54:50PM +0100, Michal Hocko wrote:
> > > On Fri 11-03-16 13:12:47, Vladimir Davydov wrote:
> > > > These fields are used for dumping info about allocation that triggered
> > > > OOM. For cgroup this information doesn't make much sense, because OOM
> > > > killer is always invoked from page fault handler.
> > > 
> > > The oom killer is indeed invoked in a different context but why printing
> > > the original mask and order doesn't make any sense? Doesn't it help to
> > > see that the reclaim has failed because of GFP_NOFS?
> > 
> > I don't see how this can be helpful. How would you use it?
> 
> If we start seeing GFP_NOFS triggered OOMs we might be forced to
> rethink our current strategy of ignoring this charge context for OOM.

IMO the fact that a lot of OOMs are triggered by GFP_NOFS allocations
can't be a good enough reason to reconsider OOM strategy. We need to
know what kind of allocation fails anyway, and the current OOM dump
gives us no clue about that.
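For reference, the mechanism being zapped can be modeled in plain C: at
charge time the failing allocation's gfp mask and order are stashed in
the task struct, and the OOM report (emitted later, from the page fault
path) prints them. This is a simplified user-space sketch of that
pattern, not the kernel code itself; the gfp flag values and helper
names here are illustrative:

```c
#include <stdio.h>

/* Illustrative stand-ins for kernel gfp flags. */
#define MY_GFP_KERNEL 0x1u
#define MY_GFP_NOFS   0x2u

/* Model of the two task_struct fields the patch removes. */
struct task {
	unsigned int memcg_oom_gfp_mask;
	int memcg_oom_order;
};

/* Charge path: remember the context of the failing allocation so the
 * OOM report, issued later from a different context, can still show it. */
static void memcg_oom_record(struct task *t, unsigned int gfp, int order)
{
	t->memcg_oom_gfp_mask = gfp;
	t->memcg_oom_order = order;
}

/* OOM path: dump the saved allocation context into buf. */
static int memcg_oom_report(const struct task *t, char *buf, int len)
{
	return snprintf(buf, len, "memcg oom: gfp_mask=0x%x, order=%d",
			t->memcg_oom_gfp_mask, t->memcg_oom_order);
}
```

Vladimir's point is that by the time this saved context is printed, the
OOM is being handled from the page fault path anyway, so the stashed
mask and order add little.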

Besides, what if the OOM was triggered by GFP_NOFS by pure chance, i.e.
it would just as well have been triggered by a GFP_KERNEL allocation had
one happened at that time? IMO it's just confusing.

>  
> > Wouldn't it be better to print err msg in try_charge anyway?
> 
> Wouldn't that lead to excessive amount of logged messages?

We could ratelimit these messages. Slab charge failures are already
reported to dmesg (see ___slab_alloc -> slab_out_of_memory) and nobody's
complained so far. Are there any non-slab GFP_NOFS allocations charged
to memcg?
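Ratelimiting here would presumably follow the kernel's usual
windowed-burst scheme (as in __ratelimit() with a DEFINE_RATELIMIT_STATE,
whose defaults are 10 messages per 5*HZ). A rough user-space model of
that behavior, with made-up numbers:

```c
#include <stdbool.h>

/* Toy model of ratelimit state: allow at most `burst` messages per
 * `interval` time units; anything beyond that in the same window is
 * suppressed. */
struct ratelimit {
	long interval;  /* window length, in arbitrary time units */
	int burst;      /* max messages per window */
	long begin;     /* start of the current window */
	int printed;    /* messages emitted in this window */
};

/* Return true if a message may be emitted at time `now`. */
static bool ratelimit_ok(struct ratelimit *rs, long now)
{
	if (now - rs->begin >= rs->interval) {
		rs->begin = now;    /* open a new window */
		rs->printed = 0;
	}
	if (rs->printed < rs->burst) {
		rs->printed++;
		return true;
	}
	return false;               /* suppressed */
}
```

The charge-failure message would then only be printed when
ratelimit_ok() allows it, keeping a flood of failing GFP_NOFS charges
from spamming dmesg.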

Thanks,
Vladimir
