Message-Id: <20120228144507.acd70d1e.akpm@linux-foundation.org>
Date: Tue, 28 Feb 2012 14:45:07 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Fengguang Wu <fengguang.wu@...el.com>
Cc: Greg Thelen <gthelen@...gle.com>, Jan Kara <jack@...e.cz>,
Ying Han <yinghan@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Rik van Riel <riel@...hat.com>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/9] memcg: dirty page accounting support routines
On Tue, 28 Feb 2012 22:00:26 +0800
Fengguang Wu <fengguang.wu@...el.com> wrote:
> From: Greg Thelen <gthelen@...gle.com>
>
> Added memcg dirty page accounting support routines. These routines are
> used by later changes to provide memcg aware writeback and dirty page
> limiting. A mem_cgroup_dirty_info() tracepoint is also included to
> allow for easier understanding of memcg writeback operation.
>
> ...
>
> +/*
> + * Return the number of additional pages that the @memcg cgroup could allocate.
> + * If use_hierarchy is set, then this involves checking parent mem cgroups to
> + * find the cgroup with the smallest free space.
> + */
Comment needs revisiting - use_hierarchy does not exist.
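Perhaps something like this instead (only a sketch, assuming the parent
walk below is the intended behaviour):

	/*
	 * Return the number of additional pages that @memcg could allocate:
	 * the smallest mem_cgroup_margin() found while walking from @memcg
	 * up through its ancestors, clamped by the global number of free
	 * pages.
	 */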
> +static unsigned long
> +mem_cgroup_hierarchical_free_pages(struct mem_cgroup *memcg)
> +{
> + u64 free;
> + unsigned long min_free;
> +
> + min_free = global_page_state(NR_FREE_PAGES);
> +
> + while (memcg) {
> + free = mem_cgroup_margin(memcg);
> + min_free = min_t(u64, min_free, free);
> + memcg = parent_mem_cgroup(memcg);
> + }
> +
> + return min_free;
> +}
> +
> +/*
> + * mem_cgroup_page_stat() - get memory cgroup file cache statistics
> + * @memcg: memory cgroup to query
> + * @item: memory statistic item exported to the kernel
> + *
> + * Return the accounted statistic value.
> + */
> +unsigned long mem_cgroup_page_stat(struct mem_cgroup *memcg,
> + enum mem_cgroup_page_stat_item item)
> +{
> + struct mem_cgroup *iter;
> + s64 value;
> +
> + /*
> + * If we're looking for dirtyable pages we need to evaluate free pages
> + * depending on the limit and usage of the parents first of all.
> + */
> + if (item == MEMCG_NR_DIRTYABLE_PAGES)
> + value = mem_cgroup_hierarchical_free_pages(memcg);
> + else
> + value = 0;
> +
> + /*
> + * Recursively evaluate page statistics against all cgroups under the
> + * hierarchy tree.
> + */
> + for_each_mem_cgroup_tree(iter, memcg)
> + value += mem_cgroup_local_page_stat(iter, item);
What's the locking rule for for_each_mem_cgroup_tree()? It's unobvious
from the code and isn't documented.
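For reference, if I'm reading the current memcontrol.c correctly, the
iteration expands to mem_cgroup_iter() calls:

	#define for_each_mem_cgroup_tree(iter, root)		\
		for (iter = mem_cgroup_iter(root, NULL, NULL);	\
		     iter != NULL;				\
		     iter = mem_cgroup_iter(root, iter, NULL))

so the walk presumably relies on mem_cgroup_iter() taking rcu_read_lock()
and css references internally. Whatever the rule is, it should be spelled
out in a comment.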
> + /*
> + * Summing of unlocked per-cpu counters is racy and may yield a slightly
> + * negative value. Zero is the only sensible value in such cases.
> + */
> + if (unlikely(value < 0))
> + value = 0;
> +
> + return value;
> +}
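For the record, the negative sum is the usual unlocked per-cpu race: a
page dirtied on CPU0 and cleaned on CPU1, with a reader that picks up
CPU1's -1 before CPU0's +1 becomes visible, computes a total of -1.
Hence the clamp to zero above.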
> +
>
> ...
>