Message-ID: <47BD48F3.3040903@linux.vnet.ibm.com>
Date: Thu, 21 Feb 2008 15:18:35 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Rik van Riel <riel@...hat.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] the proposal of improve page reclaim by throttle
KOSAKI Motohiro wrote:
> background
> ========================================
> The current VM implementation doesn't limit the number of parallel
> reclaimers. Under heavy workloads this leads to two bad things:
> - heavy lock contention
> - unnecessary swap out
>
> About two months ago, KAMEZAWA Hiroyuki proposed a page reclaim
> throttle patch and explained that it improves reclaim time.
> http://marc.info/?l=linux-mm&m=119667465917215&w=2
>
> Unfortunately, it works only for memory cgroup reclaim.
> Today I implemented it again to support global reclaim and measured it.
>
Hi, Kosaki,
It's good to keep the main reclaim code and the memory controller reclaim in
sync, so this is a nice effort.
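
For anyone reading along who has not seen the original patch, here is a
rough sketch of what "reclaim throttling" means. This is only my
illustration, not the actual patch: the limit value, the helper name and
the wait queue are made up, built around the do_try_to_free_pages() call
visible in the hunk below.

/*
 * Allow at most NR_MAX_RECLAIMERS tasks into direct reclaim at once;
 * later callers sleep until a slot frees up.  This bounds lock
 * contention and avoids many tasks redundantly scanning (and swapping
 * out) the same pages.
 */
#define NR_MAX_RECLAIMERS	3	/* hypothetical limit */

static atomic_t nr_reclaimers = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(reclaim_throttle_wq);

static unsigned long try_to_free_pages_throttled_sketch(struct zone **zones,
				int order, gfp_t gfp_mask,
				struct scan_control *sc)
{
	unsigned long ret;

	/* take a reclaim slot, or sleep until one becomes available */
	wait_event(reclaim_throttle_wq,
		   atomic_add_unless(&nr_reclaimers, 1, NR_MAX_RECLAIMERS));

	ret = do_try_to_free_pages(zones, gfp_mask, sc);

	/* release the slot and wake a waiting reclaimer */
	atomic_dec(&nr_reclaimers);
	wake_up(&reclaim_throttle_wq);
	return ret;
}
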
> @@ -1456,7 +1501,7 @@ unsigned long try_to_free_mem_cgroup_pag
> int target_zone = gfp_zone(GFP_HIGHUSER_MOVABLE);
>
> zones = NODE_DATA(numa_node_id())->node_zonelists[target_zone].zones;
> - if (do_try_to_free_pages(zones, sc.gfp_mask, &sc))
> + if (try_to_free_pages_throttled(zones, 0, sc.gfp_mask, &sc))
> return 1;
> return 0;
> }
>
try_to_free_pages_throttled() checks zone_watermark_ok(), which will not work
when we are reclaiming from a cgroup that is over its limit. We need a
different check there, to see whether the mem_cgroup is still over its limit.
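
Something along these lines, perhaps. This is just a sketch of the kind of
check I mean; mem_cgroup_over_limit() is a hypothetical helper, not an
existing function.

static int reclaim_goal_met(struct mem_cgroup *mem, struct zone *zone,
			    int order, int classzone_idx)
{
	/*
	 * Memory controller reclaim: the zone watermark says nothing
	 * useful here; what matters is whether the group has dropped
	 * back under its limit.
	 */
	if (mem)
		return !mem_cgroup_over_limit(mem);	/* hypothetical helper */

	/* Global reclaim: the existing free-page watermark test. */
	return zone_watermark_ok(zone, order, zone->pages_high,
				 classzone_idx, 0);
}
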
--
Warm Regards,
Balbir Singh
Linux Technology Center
IBM, ISTL