Message-ID: <50AA3FEF.2070100@parallels.com>
Date:	Mon, 19 Nov 2012 18:19:27 +0400
From:	Glauber Costa <glommer@...allels.com>
To:	David Rientjes <rientjes@...gle.com>
CC:	Anton Vorontsov <anton.vorontsov@...aro.org>,
	"Kirill A. Shutemov" <kirill@...temov.name>,
	Pekka Enberg <penberg@...nel.org>,
	Mel Gorman <mgorman@...e.de>,
	Leonid Moiseichuk <leonid.moiseichuk@...ia.com>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Minchan Kim <minchan@...nel.org>,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
	John Stultz <john.stultz@...aro.org>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>, <linaro-kernel@...ts.linaro.org>,
	<patches@...aro.org>, <kernel-team@...roid.com>,
	<linux-man@...r.kernel.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Michal Hocko <mhocko@...e.cz>,
	Johannes Weiner <hannes@...xchg.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [RFC v3 0/3] vmpressure_fd: Linux VM pressure notifications


>>> Umm, why do users of cpusets not want to be able to trigger memory 
>>> pressure notifications?
>>>
>> Because cpusets only deal with memory placement, not memory usage.
> 
> The set of nodes that a thread is allowed to allocate from may face memory 
> pressure up to and including oom while the rest of the system may have a 
> ton of free memory.  Your solution is to compile and mount memcg if you 
> want notifications of memory pressure on those nodes.  Others in this 
> thread have already said they don't want to rely on memcg for any of this 
> and, as Anton showed, this can be tied directly into the VM without any 
> help from memcg as it sits today.  So why not implement a simple and 
> clean mempressure cgroup that can be used alone or coexist with either 
> memcg or cpusets?
> 

Forgot this one:

Because there is a huge ongoing effort by Tejun aimed at reducing the
effects of orthogonal hierarchies. There are many controllers today
that are "close enough" to each other (cpu and cpuacct; net_prio and
net_cls), and in practice splitting them has brought more problems
than it solved.

So yes, *maybe* mempressure is the answer, but it needs to be justified
with care. Long term, I think a saner notification API for memcg will
lead us to a better and brighter future.
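
(For reference, the notification API memcg has today is the eventfd-based
threshold interface driven through cgroup.event_control. A minimal
userspace sketch of it, assuming memcg is mounted at /sys/fs/cgroup/memory,
a group named "foo" exists, and a 100 MB threshold, would look roughly
like this:

/* Sketch: block until memory.usage_in_bytes of group "foo" crosses a
 * 100 MB threshold, using the existing memcg eventfd interface.  The
 * mount point, group name and threshold are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	int efd = eventfd(0, 0);
	int ufd = open("/sys/fs/cgroup/memory/foo/memory.usage_in_bytes",
		       O_RDONLY);
	int cfd = open("/sys/fs/cgroup/memory/foo/cgroup.event_control",
		       O_WRONLY);
	char buf[64];
	uint64_t count;

	if (efd < 0 || ufd < 0 || cfd < 0) {
		perror("open");
		return 1;
	}

	/* "<event_fd> <fd of memory.usage_in_bytes> <threshold in bytes>" */
	snprintf(buf, sizeof(buf), "%d %d %llu", efd, ufd, 100ULL << 20);
	if (write(cfd, buf, strlen(buf)) < 0) {
		perror("cgroup.event_control");
		return 1;
	}

	/* Blocks until usage crosses the threshold, in either direction. */
	if (read(efd, &count, sizeof(count)) == sizeof(count))
		printf("memory threshold crossed\n");

	return 0;
}

The read() on the eventfd returns whenever usage crosses the configured
threshold, in either direction.)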

There is also yet another aspect: this scheme works well for global
notifications. If we always wanted this to be global, it would work
neatly. But as already mentioned in this thread, at some point we'll
want this to work for a group of processes as well. At that point you
have to count how much memory the group is using, so you can determine
whether or not it is under pressure. You will then have to redo all
the work memcg already does.
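
To make that duplication concrete, even the crudest userspace version of
per-group accounting would look something like the sketch below (the
fixed PID list and summing RSS from /proc/<pid>/statm are just
assumptions for illustration):

/* Sketch: sum the resident set size of a group of processes by reading
 * /proc/<pid>/statm (second field = resident pages).  The PID list is
 * hypothetical; memcg already maintains this per-group bookkeeping,
 * and much more, inside the kernel. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int pids[] = { 1234, 1235, 1236 };	/* hypothetical group members */
	long page_size = sysconf(_SC_PAGESIZE);
	unsigned long long total_pages = 0;
	unsigned int i;

	for (i = 0; i < sizeof(pids) / sizeof(pids[0]); i++) {
		char path[64];
		unsigned long size, resident;
		FILE *f;

		snprintf(path, sizeof(path), "/proc/%d/statm", pids[i]);
		f = fopen(path, "r");
		if (!f)
			continue;	/* process may have exited */
		if (fscanf(f, "%lu %lu", &size, &resident) == 2)
			total_pages += resident;
		fclose(f);
	}

	printf("group RSS: %llu bytes\n", total_pages * page_size);
	return 0;
}

And note that this naive sum double-counts shared pages and races with
tasks entering and leaving the group, problems memcg's per-page charging
already solves.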

