Message-ID: <20110921131045.GD8501@tiehlicka.suse.cz>
Date: Wed, 21 Sep 2011 15:10:45 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Johannes Weiner <jweiner@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Balbir Singh <bsingharora@...il.com>,
Ying Han <yinghan@...gle.com>,
Greg Thelen <gthelen@...gle.com>,
Michel Lespinasse <walken@...gle.com>,
Rik van Riel <riel@...hat.com>,
Minchan Kim <minchan.kim@...il.com>,
Christoph Hellwig <hch@...radead.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [patch 08/11] mm: vmscan: convert global reclaim to per-memcg LRU lists

On Mon 12-09-11 12:57:25, Johannes Weiner wrote:
> The global per-zone LRU lists are about to go away on memcg-enabled
> kernels, global reclaim must be able to find its pages on the
> per-memcg LRU lists.
>
> Since the LRU pages of a zone are distributed over all existing memory
> cgroups, a scan target for a zone is complete when all memory cgroups
> are scanned for their proportional share of a zone's memory.
>
> The forced scanning of small scan targets from kswapd is limited to
> zones marked unreclaimable, otherwise kswapd can quickly overreclaim
> by force-scanning the LRU lists of multiple memory cgroups.
>
> Signed-off-by: Johannes Weiner <jweiner@...hat.com>
Reviewed-by: Michal Hocko <mhocko@...e.cz>
Minor nit below:
> ---
> mm/vmscan.c | 39 ++++++++++++++++++++++-----------------
> 1 files changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bb4d8b8..053609e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2451,13 +2445,24 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
> static void age_active_anon(struct zone *zone, struct scan_control *sc,
> int priority)
> {
> - struct mem_cgroup_zone mz = {
> - .mem_cgroup = NULL,
> - .zone = zone,
> - };
> + struct mem_cgroup *mem;
> +
> + if (!total_swap_pages)
> + return;
> +
> + mem = mem_cgroup_iter(NULL, NULL, NULL);
Wouldn't for_each_mem_cgroup be more appropriate here? The macro is not
exported, but it is probably worth exporting (see the sketch after the
hunk below). The same applies to scan_zone_unevictable_pages from the
previous patch.
> + do {
> + struct mem_cgroup_zone mz = {
> + .mem_cgroup = mem,
> + .zone = zone,
> + };
>
> - if (inactive_anon_is_low(&mz))
> - shrink_active_list(SWAP_CLUSTER_MAX, &mz, sc, priority, 0);
> + if (inactive_anon_is_low(&mz))
> + shrink_active_list(SWAP_CLUSTER_MAX, &mz,
> + sc, priority, 0);
> +
> + mem = mem_cgroup_iter(NULL, mem, NULL);
> + } while (mem);
> }
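
For illustration, something along these lines would do, just wrapping the
two mem_cgroup_iter() calls from the hunk above (an untested sketch, and
the exported name is only a suggestion):

	/*
	 * Visit every memory cgroup in the hierarchy; a NULL root means
	 * the walk starts from the root cgroup and covers them all.
	 */
	#define for_each_mem_cgroup(iter)				\
		for (iter = mem_cgroup_iter(NULL, NULL, NULL);		\
		     iter != NULL;					\
		     iter = mem_cgroup_iter(NULL, iter, NULL))

age_active_anon() would then read:

	static void age_active_anon(struct zone *zone, struct scan_control *sc,
				    int priority)
	{
		struct mem_cgroup *mem;

		if (!total_swap_pages)
			return;

		for_each_mem_cgroup(mem) {
			struct mem_cgroup_zone mz = {
				.mem_cgroup = mem,
				.zone = zone,
			};

			if (inactive_anon_is_low(&mz))
				shrink_active_list(SWAP_CLUSTER_MAX, &mz,
						   sc, priority, 0);
		}
	}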
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic