Message-Id: <20080630112608.94fe4762.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 30 Jun 2008 11:26:08 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Hugh Dickins <hugh@...itas.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Balbir Singh <balbir@...ibm.com>,
Li Zefan <lizf@...fujitsu.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] memcg: further checking of disabled flag
On Sun, 29 Jun 2008 01:17:17 +0100 (BST)
Hugh Dickins <hugh@...itas.com> wrote:
> Further adjustments to the mem_cgroup_subsys.disabled tests: add one to
> mem_cgroup_shrink_usage; move mem_cgroup_charge_common's into its callers,
> before they've done any work; and add one to mem_cgroup_move_lists, to
> avoid the overhead of its bit spin locking and unlocking.
>
> Signed-off-by: Hugh Dickins <hugh@...itas.com>
Seems better.
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> ---
> Should follow mmotm's memcg-clean-up-checking-of-the-disabled-flag.patch
>
> mm/memcontrol.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> --- mmotm/mm/memcontrol.c 2008-06-27 13:39:20.000000000 +0100
> +++ linux/mm/memcontrol.c 2008-06-27 17:32:29.000000000 +0100
> @@ -354,6 +354,9 @@ void mem_cgroup_move_lists(struct page *
> struct mem_cgroup_per_zone *mz;
> unsigned long flags;
>
> + if (mem_cgroup_subsys.disabled)
> + return;
> +
> /*
> * We cannot lock_page_cgroup while holding zone's lru_lock,
> * because other holders of lock_page_cgroup can be interrupted
> @@ -533,9 +536,6 @@ static int mem_cgroup_charge_common(stru
> unsigned long nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> struct mem_cgroup_per_zone *mz;
>
> - if (mem_cgroup_subsys.disabled)
> - return 0;
> -
> pc = kmem_cache_alloc(page_cgroup_cache, gfp_mask);
> if (unlikely(pc == NULL))
> goto err;
> @@ -620,6 +620,9 @@ err:
>
> int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> {
> + if (mem_cgroup_subsys.disabled)
> + return 0;
> +
> /*
> * If already mapped, we don't have to account.
> * If page cache, page->mapping has address_space.
> @@ -638,6 +641,9 @@ int mem_cgroup_charge(struct page *page,
> int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
> gfp_t gfp_mask)
> {
> + if (mem_cgroup_subsys.disabled)
> + return 0;
> +
> /*
> * Corner case handling. This is called from add_to_page_cache()
> * in usual. But some FS (shmem) precharges this page before calling it
> @@ -789,6 +795,9 @@ int mem_cgroup_shrink_usage(struct mm_st
> int progress = 0;
> int retry = MEM_CGROUP_RECLAIM_RETRIES;
>
> + if (mem_cgroup_subsys.disabled)
> + return 0;
> +
> rcu_read_lock();
> mem = mem_cgroup_from_task(rcu_dereference(mm->owner));
> css_get(&mem->css);
>