Date:	Wed, 21 Sep 2011 15:51:43 +0200
From:	Johannes Weiner <jweiner@...hat.com>
To:	Michal Hocko <mhocko@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
	Balbir Singh <bsingharora@...il.com>,
	Ying Han <yinghan@...gle.com>,
	Greg Thelen <gthelen@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Christoph Hellwig <hch@...radead.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 08/11] mm: vmscan: convert global reclaim to per-memcg
 LRU lists

On Wed, Sep 21, 2011 at 03:10:45PM +0200, Michal Hocko wrote:
> On Mon 12-09-11 12:57:25, Johannes Weiner wrote:
> > The global per-zone LRU lists are about to go away on memcg-enabled
> > kernels, global reclaim must be able to find its pages on the
> > per-memcg LRU lists.
> > 
> > Since the LRU pages of a zone are distributed over all existing memory
> > cgroups, a scan target for a zone is complete when all memory cgroups
> > are scanned for their proportional share of a zone's memory.
> > 
> > The forced scanning of small scan targets from kswapd is limited to
> > zones marked unreclaimable, otherwise kswapd can quickly overreclaim
> > by force-scanning the LRU lists of multiple memory cgroups.
> > 
> > Signed-off-by: Johannes Weiner <jweiner@...hat.com>
> 
> Reviewed-by: Michal Hocko <mhocko@...e.cz>

Thanks

> Minor nit below

> > @@ -2451,13 +2445,24 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
> >  static void age_active_anon(struct zone *zone, struct scan_control *sc,
> >  			    int priority)
> >  {
> > -	struct mem_cgroup_zone mz = {
> > -		.mem_cgroup = NULL,
> > -		.zone = zone,
> > -	};
> > +	struct mem_cgroup *mem;
> > +
> > +	if (!total_swap_pages)
> > +		return;
> > +
> > +	mem = mem_cgroup_iter(NULL, NULL, NULL);
> 
> Wouldn't for_each_mem_cgroup be more appropriate? The macro is not
> exported but is probably worth exporting. The same applies to
> scan_zone_unevictable_pages from the previous patch.

Unfortunately, in generic code, these loops need to be laid out like
this for !CONFIG_MEMCG to do the right thing.  mem_cgroup_iter() will
return NULL and the loop has to execute exactly once.

This is something that will go away once we implement Christoph's
suggestion of always having a (skeleton) root_mem_cgroup around, even
for !CONFIG_MEMCG.
