[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CALWz4iyiM-CFgVaHiE1Lgd1ZwJzHwY3tx9XX6HeDPUV_wVPAtQ@mail.gmail.com>
Date: Fri, 27 Apr 2012 14:28:31 -0700
From: Ying Han <yinghan@...gle.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
Michal Hocko <mhocko@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Glauber Costa <glommer@...allels.com>,
Tejun Heo <tj@...nel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
kamezawa.hiroyuki@...il.com
Subject: Re: [RFC][PATCH 9/9 v2] memcg: never return error at pre_destroy()
On Thu, Apr 26, 2012 at 11:06 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@...fujitsu.com> wrote:
> When force_empty() called by ->pre_destroy(), no memory reclaim happens
> and it doesn't take very long time which requires signal_pending() check.
> And if we return -EINTR from pre_destroy(), cgroup.c show warning.
>
> This patch removes signal check in force_empty(). By this, ->pre_destroy()
> returns success always.
>
> Note: check for 'cgroup is empty' remains for force_empty interface.
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> ---
> mm/hugetlb.c | 10 +---------
> mm/memcontrol.c | 14 +++++---------
> 2 files changed, 6 insertions(+), 18 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4dd6b39..770f1642 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1922,20 +1922,12 @@ int hugetlb_force_memcg_empty(struct cgroup *cgroup)
> int ret = 0, idx = 0;
>
> do {
> + /* see memcontrol.c::mem_cgroup_force_empty() */
> if (cgroup_task_count(cgroup)
> || !list_empty(&cgroup->children)) {
> ret = -EBUSY;
> goto out;
> }
> - /*
> - * If the task doing the cgroup_rmdir got a signal
> - * we don't really need to loop till the hugetlb resource
> - * usage become zero.
> - */
> - if (signal_pending(current)) {
> - ret = -EINTR;
> - goto out;
> - }
> for_each_hstate(h) {
> spin_lock(&hugetlb_lock);
> list_for_each_entry(page, &h->hugepage_activelist, lru) {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2715223..ee350c5 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3852,8 +3852,6 @@ static int mem_cgroup_force_empty_list(struct mem_cgroup *memcg,
> pc = lookup_page_cgroup(page);
>
> ret = mem_cgroup_move_parent(page, pc, memcg, GFP_KERNEL);
> - if (ret == -ENOMEM || ret == -EINTR)
> - break;
>
> if (ret == -EBUSY || ret == -EINVAL) {
> /* found lock contention or "pc" is obsolete. */
> @@ -3863,7 +3861,7 @@ static int mem_cgroup_force_empty_list(struct mem_cgroup *memcg,
> busy = NULL;
> }
>
> - if (!ret && !list_empty(list))
> + if (!loop)
This looks a bit strange to me... why we make the change ?
--Ying
> return -EBUSY;
> return ret;
> }
> @@ -3893,11 +3891,12 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg, bool free_all)
> move_account:
> do {
> ret = -EBUSY;
> + /*
> + * This never happens when this is called by ->pre_destroy().
> + * But we need to take care of force_empty interface.
> + */
> if (cgroup_task_count(cgrp) || !list_empty(&cgrp->children))
> goto out;
> - ret = -EINTR;
> - if (signal_pending(current))
> - goto out;
> /* This is for making all *used* pages to be on LRU. */
> lru_add_drain_all();
> drain_all_stock_sync(memcg);
> @@ -3918,9 +3917,6 @@ move_account:
> }
> mem_cgroup_end_move(memcg);
> memcg_oom_recover(memcg);
> - /* it seems parent cgroup doesn't have enough mem */
> - if (ret == -ENOMEM)
> - goto try_to_free;
> cond_resched();
> /* "ret" should also be checked to ensure all lists are empty. */
> } while (res_counter_read_u64(&memcg->res, RES_USAGE) > 0 || ret);
> --
> 1.7.4.1
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists