Date:	Thu, 13 Feb 2014 17:12:46 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Roman Gushchin <klamm@...dex-team.ru>
Cc:	linux-mm@...ck.org, Johannes Weiner <hannes@...xchg.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Ying Han <yinghan@...gle.com>, Hugh Dickins <hughd@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Greg Thelen <gthelen@...gle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Tejun Heo <tj@...nel.org>
Subject: Re: [RFC 0/4] memcg: Low-limit reclaim

On Wed 12-02-14 16:28:36, Roman Gushchin wrote:
> Hi, Michal!
> 
> Sorry for the late reply.
> 
> At Wed, 29 Jan 2014 19:22:59 +0100,
> Michal Hocko wrote:
> > > As you may remember, I proposed introducing low limits about a year ago.
> > > 
> > > We had a small discussion at that time: http://marc.info/?t=136195226600004 .
> > 
> > Yes, I remember that discussion and vaguely remember the proposed
> > approach. I really wanted to avoid introducing a new knob, but things
> > have evolved differently than I planned since then, and it turned out
> > that the new knob is unavoidable. That's why I came up with this
> > approach, which is quite different from yours AFAIR.
> >  
> > > Since then, we have been using low limits intensively in our
> > > production environment (on thousands of machines), so I'm very
> > > interested in getting this functionality merged upstream.
> > 
> > Have you tried using this implementation? Would it work as well?
> > My very vague recollection of your patch is that it didn't cover both
> > global and target reclaim, and it didn't fit into the reclaim code very
> > naturally; it used its own scaling method. I will have to refresh my
> > memory, though.
> 
> IMHO, the main problem with your implementation is the following:
> the number of reclaimed pages is not limited at all once a cgroup is
> over its low memory limit. So a significant number of pages can be
> reclaimed even if the memory usage is only slightly (e.g. one page)
> above the low limit.
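
To make the concern concrete, here is a minimal standalone sketch (all
names are hypothetical, not the patch's actual code): once usage
exceeds the low limit, the full per-iteration target applies, however
small the excess is:

	/* Illustrative only; hypothetical names, not kernel code. */
	#define SWAP_CLUSTER_MAX 32UL	/* per-iteration target, in pages */

	static unsigned long reclaim_target(unsigned long usage,
					    unsigned long low_limit)
	{
		if (usage <= low_limit)
			return 0;	/* protected by the low limit */
		/* One page over the limit still yields the full target. */
		return SWAP_CLUSTER_MAX;
	}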

Yes, but this is the same problem as with regular reclaim. We have no
guarantee that we will reclaim only the required amount of memory; as
the reclaim priority drops, we can over-reclaim. The global reclaim
tries to avoid this problem by keeping the priority as high as
possible, and target reclaim is not a big deal because we limit the
number of reclaimed pages to the swap cluster size.
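
For context, the priority scaling mentioned above works roughly like
this (simplified; the real code in mm/vmscan.c has more details):

	#define DEF_PRIORITY 12

	/* Fraction of an LRU list scanned per pass, by reclaim priority. */
	static unsigned long nr_to_scan(unsigned long lru_size, int priority)
	{
		/*
		 * At DEF_PRIORITY only lru_size/4096 pages are scanned per
		 * pass; as the priority drops toward 0 the window widens,
		 * which is where over-reclaim can creep in.
		 */
		return lru_size >> priority;
	}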

I do not see this as a practical problem for the low_limit, though,
because it protects those groups that are below the limit, not those
above it. Small fluctuations around the limit should be tolerable.

> In my case, this problem is solved by scaling the number of scanned pages.
> 
> I think an ideal solution is to limit the number of reclaimed pages by
> the low-limit excess value. This would allow discarding my scaling code
> while preserving the strict semantics of the low limit under memory
> pressure. The main problem here is how to balance scanning pressure
> between cgroups and LRUs.
> 
> Maybe we should calculate the number of pages to scan in an LRU based
> on the low-limit excess value instead of the number of pages...
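
(For reference, the clamping proposed above would look something like
the following minimal sketch; the names are hypothetical, and the hard
part, balancing the pressure across cgroups and LRUs, is exactly what
it glosses over.)

	/* Hypothetical sketch: cap the reclaim target at the excess. */
	static unsigned long clamped_target(unsigned long usage,
					    unsigned long low_limit,
					    unsigned long nr_to_reclaim)
	{
		unsigned long excess;

		if (usage <= low_limit)
			return 0;	/* fully protected */

		excess = usage - low_limit;
		/* Never reclaim more than the amount above the low limit. */
		return nr_to_reclaim < excess ? nr_to_reclaim : excess;
	}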

I do not like it much, and I expect other mm people to feel similarly.
We already have scan scaling based on the priority; adding a new
variable into the picture would only make the whole thing more
complicated, without a very good reason for it.

[...]
-- 
Michal Hocko
SUSE Labs
