Message-Id: <20090701.152115.706994265076015808.mitake@dcl.info.waseda.ac.jp>
Date: Wed, 01 Jul 2009 15:21:15 +0900 (JST)
From: Hitoshi Mitake <mitake@....info.waseda.ac.jp>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH][RFC] Add a per-process count of acquired spinlocks to schedstat
Hi,
I wrote a test patch which adds to schedstat a count of how many spinlocks each process has acquired.
After applying this patch, /proc/<PID>/sched looks like this:
init (1, #threads: 1)
---------------------------------------------------------
se.exec_start : 482130.851458
se.vruntime : 26883.107980
se.sum_exec_runtime : 2316.651816
se.avg_overlap : 0.480053
se.avg_wakeup : 14.999993
....
se.nr_wakeups_passive : 1
se.nr_wakeups_idle : 0
se.nr_acquired_spinlock : 74483
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
avg_atom : 2.181404
avg_per_cpu : 772.217272
nr_switches : 1062
...
The line underlined with ^^^ is the new one.
It means the init process has acquired spinlocks 74483 times.
Today, spinlocks are an important factor for scalability,
so this information should be useful for people working on multicore systems.
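
(As an aside: on a kernel with this patch and CONFIG_SCHED_DEBUG/CONFIG_SCHEDSTATS enabled,
the counter can be read from userspace with something as small as the sketch below;
only the field name comes from this patch, the rest is just an illustration.)

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/self/sched", "r");
	char line[256];

	if (!f) {
		perror("fopen /proc/self/sched");
		return 1;
	}
	/* print only the se.nr_acquired_spinlock line added by this patch */
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "se.nr_acquired_spinlock"))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}
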
If you think this is useful, I would like to add more spinlock-related information,
such as average wait time (or cycle count), maximum wait time, etc.
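
(A rough sketch of what that could look like in _spin_lock(), measuring with sched_clock();
the fields spinlock_wait_total and spinlock_wait_max are hypothetical and not part of this patch.)

void __lockfunc _spin_lock(spinlock_t *lock)
{
	u64 start;

	preempt_disable();
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
	start = sched_clock();
	LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
#ifdef CONFIG_SCHEDSTATS
	{
		u64 delta = sched_clock() - start;

		current->se.nr_acquired_spinlock++;
		/* hypothetical fields: total and worst-case time spent taking the lock */
		current->se.spinlock_wait_total += delta;
		if (delta > current->se.spinlock_wait_max)
			current->se.spinlock_wait_max = delta;
	}
#endif
}
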
But this patch has one point to consider: the line

	current->se.nr_acquired_spinlock++;

breaks the convention that SCHEDSTAT-related members of sched_entity are incremented with schedstat_inc().
I couldn't use schedstat_inc() here because of the structure of sched_stats.h:
the header is private to kernel/sched.c, so kernel/spinlock.c cannot include it.
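(For reference, the existing helpers there look roughly like this:)

#ifdef CONFIG_SCHEDSTATS
# define schedstat_inc(rq, field)	do { (rq)->field++; } while (0)
# define schedstat_add(rq, field, amt)	do { (rq)->field += (amt); } while (0)
#else
# define schedstat_inc(rq, field)	do { } while (0)
# define schedstat_add(rq, field, amt)	do { } while (0)
#endif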
What do you think about this?
Signed-off-by: Hitoshi Mitake <mitake@....info.waseda.ac.jp>
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0085d75..f63b11f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1127,6 +1127,8 @@ struct sched_entity {
u64 nr_wakeups_affine_attempts;
u64 nr_wakeups_passive;
u64 nr_wakeups_idle;
+
+ u64 nr_acquired_spinlock;
#endif
#ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index 70c7e0b..792b0f7 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -426,6 +426,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
P(se.nr_wakeups_affine_attempts);
P(se.nr_wakeups_passive);
P(se.nr_wakeups_idle);
+ P(se.nr_acquired_spinlock);
{
u64 avg_atom, avg_per_cpu;
@@ -500,6 +501,7 @@ void proc_sched_set_task(struct task_struct *p)
p->se.nr_wakeups_affine_attempts = 0;
p->se.nr_wakeups_passive = 0;
p->se.nr_wakeups_idle = 0;
+ p->se.nr_acquired_spinlock = 0;
p->sched_info.bkl_count = 0;
#endif
p->se.sum_exec_runtime = 0;
diff --git a/kernel/spinlock.c b/kernel/spinlock.c
index 7932653..92c1ed6 100644
--- a/kernel/spinlock.c
+++ b/kernel/spinlock.c
@@ -181,6 +181,10 @@ void __lockfunc _spin_lock(spinlock_t *lock)
preempt_disable();
spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
+
+#ifdef CONFIG_SCHEDSTATS
+ current->se.nr_acquired_spinlock++;
+#endif
}
EXPORT_SYMBOL(_spin_lock);
--