Date:	Thu, 5 Jun 2014 16:51:09 +0200
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Hugh Dickins <hughd@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Greg Thelen <gthelen@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Tejun Heo <tj@...nel.org>,
	Roman Gushchin <klamm@...dex-team.ru>,
	LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH v2 0/4] memcg: Low-limit reclaim

On Wed 04-06-14 17:45:53, Johannes Weiner wrote:
> On Wed, Jun 04, 2014 at 12:18:59PM -0700, Hugh Dickins wrote:
> > On Wed, 4 Jun 2014, Johannes Weiner wrote:
> > > On Wed, Jun 04, 2014 at 04:46:58PM +0200, Michal Hocko wrote:
> > > > 
> > > > In the other email I have suggested to add a knob with the configurable
> > > > default. Would you be OK with that?
> > > 
> > > No, I want to agree on whether we need that fallback code or not.  I'm
> > > not interested in merging code that you can't convince anybody else is
> > > needed.
> > 
> > I for one would welcome such a knob as Michal is proposing.
> 
> Now we have a tie :-)
> 
> > I thought it was long ago agreed that the low limit was going to fallback
> > when it couldn't be satisfied.  But you seem implacably opposed to that
> > as default, and I can well believe that Google is so accustomed to OOMing
> > that it is more comfortable with OOMing as the default.  Okay.  But I
> > would expect there to be many who want the attempt towards isolation that
> > low limit offers, without a collapse to OOM at the first misjudgement.
> 
> At the same time, I only see users like Google pushing the limits of
> the machine to a point where guarantees cover north of 90% of memory.

I can think of in-memory database loads which would use quite high
reclaim protection as well (say 80% of available memory). Those would
definitely rather see some temporary reclaim below the protection than
an OOM kill.
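
To make that concrete (purely a hypothetical sketch - the cgroup path
and the memory.low_limit_in_bytes file name below are assumptions for
illustration, not something defined in this thread), setting up such
protection from userspace could look like:

/*
 * Hypothetical sketch only: reserve ~80% of physical memory for a
 * database cgroup via a per-memcg low-limit file. The cgroup path and
 * the "memory.low_limit_in_bytes" file name are assumptions made for
 * illustration.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned long long total = (unsigned long long)sysconf(_SC_PHYS_PAGES) *
				   (unsigned long long)sysconf(_SC_PAGE_SIZE);
	unsigned long long low = total / 10 * 8;	/* ~80% of RAM */
	FILE *f = fopen("/sys/fs/cgroup/memory/db/memory.low_limit_in_bytes", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%llu\n", low);
	fclose(f);
	return 0;
}

The point is just that the guarantee covers most of RAM, which is
exactly the situation where the fallback-vs-OOM decision matters.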

> I would expect more casual users to work with much smaller guarantees,
> and a good chunk of slack on top - otherwise they already had better
> be set up for the occasional OOM.  Is this an unreasonable assumption
> to make?
> 
> I'm not opposed to this feature per se, but I'm really opposed to
> merging it for the partial hard bindings argument

This was just an example that even a setup which is not overcommitting
the limit might be caught in an unreclaimable position. Sure, we can
mitigate those issues to some extent and that would surely be welcome.

The more important part, however, is that not all usecases really
_require_ a hard guarantee. They are asking for reasonable memory
isolation which they currently do not have. A risk of OOM would be a
no-go for them, so the feature wouldn't be useful to them.

I have repeatedly said that I can also see some use for the hard
guarantee, mainly to support overcommit on the limit. I haven't heard
about such usecases yet, but it seems that at least Google would like
to have really hard guarantees.

So I think the best way forward is to have a configurable default and
a per-memcg knob.
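
To sketch what I mean (a userspace toy model, not kernel code and not
taken from this patch set; every name in it is made up): reclaim would
honour all guarantees first, then fall back to breaking the "soft"
ones, while a group marked "hard" stays off limits and leaves OOM as
the only remaining option:

/*
 * Toy model of the decision being discussed: two passes over the
 * memcgs. Pass 1 skips every group below its low limit. If that
 * reclaims nothing, pass 2 revisits groups whose guarantee is soft,
 * still honouring hard guarantees. None of this is real memcg code.
 */
#include <stdbool.h>
#include <stdio.h>

struct memcg {
	const char *name;
	unsigned long usage;	/* pages currently charged */
	unsigned long low;	/* low limit (guarantee) in pages */
	bool hard;		/* true: never reclaim below the limit */
};

/* One reclaim pass; if low_fallback is set, soft guarantees may be broken. */
static unsigned long try_reclaim(struct memcg *g, int n, bool low_fallback)
{
	unsigned long reclaimed = 0;
	int i;

	for (i = 0; i < n; i++) {
		bool below_low = g[i].usage <= g[i].low;

		if (below_low && (g[i].hard || !low_fallback))
			continue;	/* respect the guarantee */

		/* pretend one pass reclaims a quarter of the group's usage */
		reclaimed += g[i].usage / 4;
		g[i].usage -= g[i].usage / 4;
	}
	return reclaimed;
}

int main(void)
{
	struct memcg groups[] = {
		{ "db",    700, 800, false },	/* soft guarantee, under limit */
		{ "batch",  50, 100, false },	/* soft guarantee, under limit */
		{ "rt",    200, 300, true  },	/* hard guarantee */
	};
	int n = (int)(sizeof(groups) / sizeof(groups[0]));
	unsigned long got = try_reclaim(groups, n, false);

	if (!got) {
		/* everything is protected: fall back to soft guarantees first */
		got = try_reclaim(groups, n, true);
		if (!got)
			printf("nothing reclaimable -> OOM\n");
	}
	printf("reclaimed %lu pages\n", got);
	return 0;
}

The only difference between the two camps is whether the second
try_reclaim() pass runs by default, and that is what the knob would
decide.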

> and for papering over deficiencies in our reclaim code, because I
> don't want any of that in the changelog, in the documentation, or in
> what we otherwise tell users about it.


-- 
Michal Hocko
SUSE Labs
