Message-ID: <20090323063603.GA15360@in.ibm.com>
Date: Mon, 23 Mar 2009 12:06:03 +0530
From: Bharata B Rao <bharata@...ux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: linux-kernel@...r.kernel.org, Balaji Rao <balajirrao@...il.com>,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Li Zefan <lizf@...fujitsu.com>,
Paul Menage <menage@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH -tip] cpuacct: per-cgroup utime/stime statistics - v4
On Mon, Mar 23, 2009 at 01:55:28PM +0900, KAMEZAWA Hiroyuki wrote:
> On Mon, 23 Mar 2009 10:05:38 +0530
> Bharata B Rao <bharata@...ux.vnet.ibm.com> wrote:
>
> > Hi Ingo,
> >
> > Here is the v4 of the cpuacct statistics patch to obtain per-cgroup
> > system and user times with appropriate tags and complete changelog.
> > This applies against the latest -tip plus the cpuacct hierarchy fix v2
> > which I posted just now. Could you please include this in -tip ?
> >
> > Changelog:
> >
> > v4
> > - Remove comments in cpuacct_update_stats() which explained why rcu_read_lock()
> > was needed (as per Peter Zijlstra's review comments).
> > - Don't say that percpu_counter_read() is broken in Documentation/cpuacct.txt
> > as per KAMEZAWA Hiroyuki's review comments.
> >
> Broken -> isn't safe ? no difference ;)
From your comment last time, I thought you meant that calling it broken
would be harsh.
> Can't this help you ?
>
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> ---
> Index: mmotm-2.6.29-Mar21/include/linux/percpu_counter.h
> ===================================================================
> --- mmotm-2.6.29-Mar21.orig/include/linux/percpu_counter.h
> +++ mmotm-2.6.29-Mar21/include/linux/percpu_counter.h
> @@ -62,6 +62,16 @@ static inline s64 percpu_counter_read(st
> return fbc->count;
> }
>
> +static inline s64 percpu_counter_read_safe(struct percpu_counter *fbc)
> +{
> + s64 ret;
> +
> + spin_lock(&fbc->lock);
> + ret = fbc->count;
> + spin_unlock(&fbc->lock);
> + return ret;
> +}
> +
> /*
> * It is possible for the percpu_counter_read() to return a small negative
> * number for some counter which should never be negative.
> @@ -114,6 +124,11 @@ static inline s64 percpu_counter_read(st
> return fbc->count;
> }
>
> +static inline s64 percpu_counter_read_safe(struct percpu_counter *fbc)
> +{
> + return fbc->count;
> +}
> +
> static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
> {
> return fbc->count;
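For reference, a rough sketch of how the cpuacct.stat read path could use
such a helper; the cpuacct struct and stat names below are paraphrased
from my patch rather than quoted from it, so treat this as illustration
only:

/*
 * Sketch only: cpuacct_stats_show() switched from percpu_counter_read()
 * to the proposed percpu_counter_read_safe().  cpuacct, cpustat[],
 * CPUACCT_STAT_NSTATS and cpuacct_stat_desc[] (the "user"/"system"
 * labels) are assumed names and may differ in detail from the patch.
 */
static int cpuacct_stats_show(struct cgroup *cgrp, struct cftype *cft,
			      struct cgroup_map_cb *cb)
{
	struct cpuacct *ca = cgroup_ca(cgrp);
	int i;

	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
		s64 val = percpu_counter_read_safe(&ca->cpustat[i]);

		val = cputime64_to_clock_t(val);
		cb->fill(cb, cpuacct_stat_desc[i], val);
	}
	return 0;
}
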
Yes it helps, but for the record, let me note that it makes the read side
~20% slower.

[I measured the time spent in cpuacct_stats_show(), which is part of the
read routine for cpuacct.stat, using a kretprobe. For 100 reads (i.e. 100
executions of cpuacct_stats_show()) I get the following numbers:

percpu_counter_read()      - 3166367 ns
percpu_counter_read_safe() - 3793546 ns, which is ~20% slower.]
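
A rough way to see the same effect from user space, without kretprobes, is
simply to time repeated reads of cpuacct.stat. In the snippet below the
cgroup mount point is an assumption and may need adjusting, and the numbers
include open/read/close overhead, so it only approximates the kretprobe
measurement above:

/*
 * Crude user-space timing of 100 reads of cpuacct.stat.  Assumes the
 * cpuacct controller is mounted at /cgroup; adjust the path as needed.
 */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/cgroup/cpuacct.stat";	/* assumed mount point */
	char buf[256];
	struct timespec t0, t1;
	long long ns;
	int i, fd;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < 100; i++) {
		fd = open(path, O_RDONLY);
		if (fd < 0) {
			perror(path);
			return 1;
		}
		if (read(fd, buf, sizeof(buf)) < 0)	/* one read of cpuacct.stat */
			perror("read");
		close(fd);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	     (t1.tv_nsec - t0.tv_nsec);
	printf("100 reads of %s took %lld ns\n", path, ns);
	return 0;
}
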
I would prefer that this patch be included in its current form, and that
we fix percpu_counter_read() separately, given that there has already been
an attempt to fix it in the past (http://lkml.org/lkml/2008/8/27/303).
Moreover, I am not sure how acceptable it is to work around the problem in
percpu_counter_read() by introducing another variant,
percpu_counter_read_safe().
Regards,
Bharata.