Message-ID: <b040c32a0812152306p6b72e1d6r548daa66f27f24d6@mail.gmail.com>
Date: Mon, 15 Dec 2008 23:06:41 -0800
From: Ken Chen <kenchen@...gle.com>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [patch 2/2] cpuacct: export percpu cpuacct cgroup stats
sorry, forgot to cc lkml on initial post.
---------- Forwarded message ----------
From: Ken Chen <kenchen@...gle.com>
Date: Mon, Dec 15, 2008 at 10:04 PM
Subject: [patch 2/2] cpuacct: export percpu cpuacct cgroup stats
To: Ingo Molnar <mingo@...e.hu>, Andrew Morton
<akpm@...ux-foundation.org>, Paul Menage <menage@...gle.com>, Li Zefan
<lizf@...fujitsu.com>
This patch exports per-CPU CPU cycle usage for a given cpuacct cgroup.
There is a need for a user-space monitor daemon to track group CPU
usage on a per-CPU basis. It is also useful for monitoring CFS load
balancer behavior by tracking per-CPU group usage.
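As a rough illustration, a monitor daemon could poll the new file like
this (a minimal sketch; the /dev/cgroup mount point and the "mygroup"
group name are assumptions for the example, not part of this patch):

/*
 * Sketch of a userspace reader for cpuacct.usage_percpu.
 * Assumes the cpuacct subsystem is mounted at /dev/cgroup and a
 * group named "mygroup" exists; substitute real paths as needed.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/dev/cgroup/mygroup/cpuacct.usage_percpu", "r");
	unsigned long long usage;
	int cpu = 0;

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* the file holds one space-separated u64 per present CPU */
	while (fscanf(f, "%llu", &usage) == 1)
		printf("cpu%d: %llu\n", cpu++, usage);
	fclose(f);
	return 0;
}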
Signed-off-by: Ken Chen <kenchen@...gle.com>
Reviewed-by: Li Zefan <lizf@...fujitsu.com>
Reviewed-by: Andrew Morton <akpm@...ux-foundation.org>
diff --git a/kernel/sched.c b/kernel/sched.c
index 124bd7a..cfdaace 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -9369,12 +9369,32 @@ out:
 	return err;
 }
 
+static int cpuacct_percpu_seq_read(struct cgroup *cgroup, struct cftype *cft,
+				   struct seq_file *m)
+{
+	struct cpuacct *ca = cgroup_ca(cgroup);
+	u64 percpu;
+	int i;
+
+	for_each_present_cpu(i) {
+		percpu = cpuacct_cpuusage_read(ca, i);
+		seq_printf(m, "%llu ", (unsigned long long) percpu);
+	}
+	seq_printf(m, "\n");
+	return 0;
+}
+
 static struct cftype files[] = {
 	{
 		.name = "usage",
 		.read_u64 = cpuusage_read,
 		.write_u64 = cpuusage_write,
 	},
+	{
+		.name = "usage_percpu",
+		.read_seq_string = cpuacct_percpu_seq_read,
+	},
+
 };
 
 static int cpuacct_populate(struct cgroup_subsys *ss, struct cgroup *cgrp)
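
With this applied, cpuacct.usage_percpu reads back as a single line:
one space-separated cumulative usage value per present CPU (a u64, in
the same nanosecond units as cpuacct.usage), terminated by a newline.
For example, on a hypothetical 4-CPU machine (values illustrative):

$ cat /dev/cgroup/mygroup/cpuacct.usage_percpu
256577097 169418397 191114377 156286992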
--