Message-ID: <b040c32a0812051016q7912688dw2c663a77121a8f18@mail.gmail.com>
Date: Fri, 5 Dec 2008 10:16:30 -0800
From: Ken Chen <kenchen@...gle.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Li Zefan <lizf@...fujitsu.com>, Paul Menage <menage@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [patch] export percpu cpuacct cgroup stats

On Fri, Dec 5, 2008 at 5:52 AM, Ingo Molnar <mingo@...e.hu> wrote:
> Could someone please send the final patch with a final changelog, with
> all fixlets and tags in place please?

Here it is:

This patch exports per-cpu CPU cycle usage for a given cpuacct cgroup.
There is a need for a user-space monitor daemon to track group CPU
usage on a per-CPU basis.  It is also useful for monitoring CFS load
balancer behavior by tracking per-CPU group usage.
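
To illustrate the intended consumer, below is a minimal user-space
sketch of how such a monitor daemon might parse the new
cpuacct.usage_percpu file.  The /dev/cgroup mount point and the group
name are assumptions for illustration only; the values are cumulative
per-CPU counters in the same unit as the existing cpuacct.usage file.

/*
 * Minimal sketch: print one cumulative counter per possible CPU from
 * cpuacct.usage_percpu.  The mount point and group name below are
 * assumptions; adjust them to match the local cgroup setup.
 */
#include <stdio.h>

int main(void)
{
	/* hypothetical path: cpuacct hierarchy mounted at /dev/cgroup */
	FILE *f = fopen("/dev/cgroup/mygroup/cpuacct.usage_percpu", "r");
	unsigned long long usage;
	int cpu = 0;

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fscanf(f, "%llu", &usage) == 1)
		printf("cpu%d: %llu\n", cpu++, usage);
	fclose(f);
	return 0;
}

Two successive reads can be diffed to obtain the CPU time consumed by
the group on each CPU over the sampling interval.
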
Signed-off-by: Ken Chen <kenchen@...gle.com>
Reviewed-by: Li Zefan <lizf@...fujitsu.com>

diff --git a/kernel/sched.c b/kernel/sched.c
index b7480fb..055c54f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -9345,12 +9345,34 @@ out:
 	return err;
 }
 
+static int cpuacct_percpu_seq_read(struct cgroup *cgroup, struct cftype *cft,
+				   struct seq_file *m)
+{
+	struct cpuacct *ca = cgroup_ca(cgroup);
+	u64 percpu;
+	int i;
+
+	for_each_possible_cpu(i) {
+		spin_lock_irq(&cpu_rq(i)->lock);
+		percpu = *percpu_ptr(ca->cpuusage, i);
+		spin_unlock_irq(&cpu_rq(i)->lock);
+		seq_printf(m, "%llu ", (unsigned long long) percpu);
+	}
+	seq_printf(m, "\n");
+	return 0;
+}
+
 static struct cftype files[] = {
 	{
 		.name = "usage",
 		.read_u64 = cpuusage_read,
 		.write_u64 = cpuusage_write,
 	},
+	{
+		.name = "usage_percpu",
+		.read_seq_string = cpuacct_percpu_seq_read,
+	},
+
 };
 
 static int cpuacct_populate(struct cgroup_subsys *ss, struct cgroup *cgrp)