Date:	Mon, 19 Dec 2011 13:12:55 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Anton Vorontsov <anton.vorontsov@...aro.org>
Cc:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Arve Hjønnevåg <arve@...roid.com>,
	Rik van Riel <riel@...hat.com>, Pavel Machek <pavel@....cz>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	David Rientjes <rientjes@...gle.com>,
	John Stultz <john.stultz@...aro.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: Android low memory killer vs. memory pressure notifications

[Didn't get to the patch yet but a comment on memcg]

On Mon 19-12-11 06:53:28, Anton Vorontsov wrote:
[...]
> - Use memory controller cgroup (CGROUP_MEM_RES_CTLR) notifications from
>   the kernel side, plus userland "manager" that would kill applications.
> 
>   The main downside of this approach is that mem_cg needs 20 bytes per
>   page (on a 32 bit machine). So on a 32 bit machine with 4K pages
>   that's approx. 0.5% of RAM, or, in other words, 5MB on a 1GB machine.

page_cgroup is 16B per page, and with Johannes' current memcg
naturalization work (in the mmotm tree) we are down to 8B per page (we
got rid of the lru field). Kamezawa has some patches to get rid of the
flags, so we will be down to 4B per page on 32-bit. Is this still too
much?
I would be really careful about adding yet another lowmem notification
mechanism.
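
For reference, the memcg notification path that the option above refers
to is the existing eventfd + cgroup.event_control interface. A minimal
sketch of a userland manager registering a usage threshold follows; the
cgroup path and the 64MB threshold are illustrative assumptions, not
anything from this thread:

	/*
	 * Sketch: block on an eventfd until the memcg's
	 * memory.usage_in_bytes crosses a threshold.
	 */
	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/eventfd.h>

	#define MEMCG "/sys/fs/cgroup/memory/apps"	/* assumed mount/group */

	int main(void)
	{
		char buf[64];
		int efd = eventfd(0, 0);
		int ufd = open(MEMCG "/memory.usage_in_bytes", O_RDONLY);
		int cfd = open(MEMCG "/cgroup.event_control", O_WRONLY);
		uint64_t ticks;

		if (efd < 0 || ufd < 0 || cfd < 0) {
			perror("open");
			return 1;
		}

		/* "<eventfd> <fd of memory.usage_in_bytes> <threshold in bytes>" */
		snprintf(buf, sizeof(buf), "%d %d %llu", efd, ufd, 64ULL << 20);
		if (write(cfd, buf, strlen(buf)) < 0) {
			perror("event_control");
			return 1;
		}

		/* Blocks until usage crosses the 64MB threshold. */
		read(efd, &ticks, sizeof(ticks));
		printf("threshold crossed, wake the userland manager\n");
		return 0;
	}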

>   0.5% doesn't sound too bad, but 5MB does, quite a little bit. So,
>   mem_cg feels like an overkill for this simple task (see the driver at
>   the very bottom).

Why is it overkill? I think that having two groups (active and
inactive) and moving tasks between them sounds quite elegant. You can
implement a userspace OOM handler for both groups (the active one would
just move a task to the inactive group, while the inactive one would
kill the task that hasn't been used for the longest time).
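
To make that concrete, here is a rough sketch (my illustration, not
code from this thread) of the active group's handler: it disables the
kernel OOM killer for the group, waits for an OOM notification via
eventfd, and demotes a task into the inactive group instead of killing
it. The cgroup paths and the naive victim-selection policy are
assumptions:

	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/eventfd.h>

	#define ACTIVE   "/sys/fs/cgroup/memory/active"		/* assumed layout */
	#define INACTIVE "/sys/fs/cgroup/memory/inactive"

	/* Placeholder policy: a real handler would track per-task idle time. */
	static int pick_least_recently_used(void)
	{
		int pid = 0;
		FILE *f = fopen(ACTIVE "/tasks", "r");

		if (f) {
			fscanf(f, "%d", &pid);	/* naive: just take the first task */
			fclose(f);
		}
		return pid;
	}

	int main(void)
	{
		char buf[64];
		int efd = eventfd(0, 0);
		int ofd = open(ACTIVE "/memory.oom_control", O_RDWR);
		int cfd = open(ACTIVE "/cgroup.event_control", O_WRONLY);
		uint64_t ticks;

		/* Defer OOM handling for the active group to userland. */
		write(ofd, "1", 1);

		/* Register the eventfd for OOM notifications. */
		snprintf(buf, sizeof(buf), "%d %d", efd, ofd);
		write(cfd, buf, strlen(buf));

		for (;;) {
			read(efd, &ticks, sizeof(ticks));	/* group hit its limit */

			int victim = pick_least_recently_used();
			if (victim > 0) {
				FILE *t = fopen(INACTIVE "/tasks", "w");
				if (t) {
					fprintf(t, "%d\n", victim);	/* demote, don't kill */
					fclose(t);
				}
			}
		}
		return 0;
	}

With memory.move_charge_at_immigrate set in the inactive group, the
demoted task's charges would follow it out of the active group's limit;
the inactive group's handler would then be the one sending SIGKILL when
its own limit is hit.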
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic
