Date: Tue, 11 Nov 2008 10:17:27 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
CC: linux-mm@...ck.org, YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
Paul Menage <menage@...gle.com>, lizf@...fujitsu.com,
linux-kernel@...r.kernel.org,
Nick Piggin <nickpiggin@...oo.com.au>,
David Rientjes <rientjes@...gle.com>,
Pavel Emelianov <xemul@...nvz.org>,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC][mm] [PATCH 3/4] Memory cgroup hierarchical reclaim (v2)
KAMEZAWA Hiroyuki wrote:
> On Sat, 08 Nov 2008 14:41:00 +0530
> Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
>
>> This patch introduces hierarchical reclaim. When a charge causes an
>> ancestor to go over its limit, the charging routine locates that
>> ancestor. Reclaim then starts from the ancestor's last scanned child
>> and continues until the ancestor is back below its limit.
>>
>> Signed-off-by: Balbir Singh <balbir@...ux.vnet.ibm.com>
>> ---
>>
>> mm/memcontrol.c | 152 +++++++++++++++++++++++++++++++++++++++++++++++---------
>> 1 file changed, 128 insertions(+), 24 deletions(-)
>>
>> diff -puN mm/memcontrol.c~memcg-hierarchical-reclaim mm/memcontrol.c
>> --- linux-2.6.28-rc2/mm/memcontrol.c~memcg-hierarchical-reclaim 2008-11-08 14:09:32.000000000 +0530
>> +++ linux-2.6.28-rc2-balbir/mm/memcontrol.c 2008-11-08 14:09:32.000000000 +0530
>> @@ -132,6 +132,11 @@ struct mem_cgroup {
>> * statistics.
>> */
>> struct mem_cgroup_stat stat;
>> + /*
>> + * While reclaiming in a hierarchy, we cache the last child we
>> + * reclaimed from.
>> + */
>> + struct mem_cgroup *last_scanned_child;
>> };
>> static struct mem_cgroup init_mem_cgroup;
>>
>> @@ -467,6 +472,124 @@ unsigned long mem_cgroup_isolate_pages(u
>> return nr_taken;
>> }
>>
>> +static struct mem_cgroup *
>> +mem_cgroup_from_res_counter(struct res_counter *counter)
>> +{
>> + return container_of(counter, struct mem_cgroup, res);
>> +}
>> +
>> +/*
>> + * Dance down the hierarchy if needed to reclaim memory. We remember the
>> + * last child we reclaimed from, so that we don't end up penalizing
>> + * one child extensively based on its position in the children list.
>> + *
>> + * root_mem is the original ancestor that we've been reclaiming from.
>> + */
>> +static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *mem,
>> + struct mem_cgroup *root_mem,
>> + gfp_t gfp_mask)
>> +{
>> + struct cgroup *cg_current, *cgroup;
>> + struct mem_cgroup *mem_child;
>> + int ret = 0;
>> +
>> + /*
>> + * Reclaim unconditionally and don't check for return value.
>> + * We need to reclaim in the current group and down the tree.
>> + * One might think about checking for children before reclaiming,
>> + * but there might be left-over accounting even after the children
>> + * are gone.
>> + */
>> + try_to_free_mem_cgroup_pages(mem, gfp_mask);
>> +
>> + if (res_counter_check_under_limit(&root_mem->res))
>> + return 0;
>> +
>> + if (list_empty(&mem->css.cgroup->children))
>> + return 0;
>> +
>> + /*
>> + * Scan all children under the mem_cgroup mem
>> + */
>> + if (!mem->last_scanned_child)
>> + cgroup = list_first_entry(&mem->css.cgroup->children,
>> + struct cgroup, sibling);
>> + else
>> + cgroup = mem->last_scanned_child->css.cgroup;
>> +
>
> What guarantees that this last_scanned_child is still accessible at this point?
>
Good catch! I'll fix this in mem_cgroup_destroy. It'll need some locking around
it as well.
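
Something like the below, completely untested; reclaim_lock here is a new
spinlock that would have to be added to struct mem_cgroup, and the reclaim
path would also have to take it around its reads and updates of
last_scanned_child:

static void mem_cgroup_destroy(struct cgroup_subsys *ss, struct cgroup *cont)
{
	struct mem_cgroup *mem = mem_cgroup_from_cont(cont);
	struct cgroup *pcg = cont->parent;

	if (pcg) {
		struct mem_cgroup *parent = mem_cgroup_from_cont(pcg);

		/*
		 * Drop the parent's cached pointer if it refers to the
		 * group being destroyed, so reclaim never follows a
		 * stale pointer into freed memory.
		 */
		spin_lock(&parent->reclaim_lock);
		if (parent->last_scanned_child == mem)
			parent->last_scanned_child = NULL;
		spin_unlock(&parent->reclaim_lock);
	}

	/* existing teardown continues as before */
	mem_cgroup_free(mem);
}

The alternative would be to pin the cached child with a reference count
instead of clearing it, but clearing on destroy keeps the lifetime rules
simpler.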
--
Balbir