Message-ID: <tip-ba19f51fcb549c7ee6261da243eea55a47e98d78@git.kernel.org>
Date: Tue, 25 Jun 2019 01:26:26 -0700
From: tip-bot for Qais Yousef <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, pkondeti@...eaurora.org,
quentin.perret@....com, bigeasy@...utronix.de,
u.kleine-koenig@...gutronix.de, mingo@...nel.org,
dietmar.eggemann@....com, peterz@...radead.org, tglx@...utronix.de,
hpa@...or.com, rostedt@...dmis.org, qais.yousef@....com,
torvalds@...ux-foundation.org
Subject: [tip:sched/core] sched/debug: Add new tracepoints to track PELT at
rq level
Commit-ID: ba19f51fcb549c7ee6261da243eea55a47e98d78
Gitweb: https://git.kernel.org/tip/ba19f51fcb549c7ee6261da243eea55a47e98d78
Author: Qais Yousef <qais.yousef@....com>
AuthorDate: Tue, 4 Jun 2019 12:14:56 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 24 Jun 2019 19:23:41 +0200
sched/debug: Add new tracepoints to track PELT at rq level
The new tracepoints allow tracking PELT signals at rq level for all
scheduling classes + irq.
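
These are bare DECLARE_TRACE() hooks with no TRACE_EVENT definition, so
nothing appears under the tracefs events directory; a consumer attaches a
probe in code through the generated register_trace_*_tp() helpers. Below is
a minimal sketch of such a consumer (a hypothetical out-of-tree test module
with made-up names; it assumes the tracepoint symbols are exported to
modules, which later patches in this series arrange, and it does not
dereference struct rq, since that type is private to kernel/sched/):

	/* Hypothetical test module: count rq-level RT PELT updates by
	 * hooking pelt_rt_tp.
	 */
	#include <linux/module.h>
	#include <linux/atomic.h>
	#include <linux/tracepoint.h>
	#include <trace/events/sched.h>

	static atomic64_t pelt_rt_updates = ATOMIC64_INIT(0);

	/* Probe signature: the registration cookie comes first, then the
	 * TP_PROTO arguments of the tracepoint.
	 */
	static void pelt_rt_probe(void *data, struct rq *rq)
	{
		atomic64_inc(&pelt_rt_updates);
	}

	static int __init pelt_probe_init(void)
	{
		return register_trace_pelt_rt_tp(pelt_rt_probe, NULL);
	}

	static void __exit pelt_probe_exit(void)
	{
		unregister_trace_pelt_rt_tp(pelt_rt_probe, NULL);
		/* Wait for in-flight probe calls before the module goes away. */
		tracepoint_synchronize_unregister();
		pr_info("pelt_rt updates seen: %lld\n",
			(long long)atomic64_read(&pelt_rt_updates));
	}

	module_init(pelt_probe_init);
	module_exit(pelt_probe_exit);
	MODULE_LICENSE("GPL");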
Signed-off-by: Qais Yousef <qais.yousef@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Pavankumar Kondeti <pkondeti@...eaurora.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Quentin Perret <quentin.perret@....com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Uwe Kleine-König <u.kleine-koenig@...gutronix.de>
Link: https://lkml.kernel.org/r/20190604111459.2862-4-qais.yousef@arm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
include/trace/events/sched.h | 23 +++++++++++++++++++++++
kernel/sched/fair.c | 6 ++++++
kernel/sched/pelt.c | 9 ++++++++-
3 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index c8c7c7efb487..520b89d384ec 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -594,6 +594,29 @@ TRACE_EVENT(sched_wake_idle_without_ipi,
TP_printk("cpu=%d", __entry->cpu)
);
+
+/*
+ * Following tracepoints are not exported in tracefs and provide hooking
+ * mechanisms only for testing and debugging purposes.
+ *
+ * Postfixed with _tp to make them easily identifiable in the code.
+ */
+DECLARE_TRACE(pelt_cfs_tp,
+ TP_PROTO(struct cfs_rq *cfs_rq),
+ TP_ARGS(cfs_rq));
+
+DECLARE_TRACE(pelt_rt_tp,
+ TP_PROTO(struct rq *rq),
+ TP_ARGS(rq));
+
+DECLARE_TRACE(pelt_dl_tp,
+ TP_PROTO(struct rq *rq),
+ TP_ARGS(rq));
+
+DECLARE_TRACE(pelt_irq_tp,
+ TP_PROTO(struct rq *rq),
+ TP_ARGS(rq));
+
#endif /* _TRACE_SCHED_H */
/* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 461c3e9a67b2..e883d7e17e36 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3347,6 +3347,8 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
update_tg_cfs_util(cfs_rq, se, gcfs_rq);
update_tg_cfs_runnable(cfs_rq, se, gcfs_rq);
+ trace_pelt_cfs_tp(cfs_rq);
+
return 1;
}
@@ -3499,6 +3501,8 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
add_tg_cfs_propagate(cfs_rq, se->avg.load_sum);
cfs_rq_util_change(cfs_rq, flags);
+
+ trace_pelt_cfs_tp(cfs_rq);
}
/**
@@ -3518,6 +3522,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
cfs_rq_util_change(cfs_rq, 0);
+
+ trace_pelt_cfs_tp(cfs_rq);
}
/*
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 42ea66b07b1d..4e961b55b5ea 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -28,6 +28,8 @@
#include "sched.h"
#include "pelt.h"
+#include <trace/events/sched.h>
+
/*
* Approximate:
* val * y^n, where y^32 ~= 0.5 (~1 scheduling period)
@@ -292,6 +294,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
cfs_rq->curr != NULL)) {
___update_load_avg(&cfs_rq->avg, 1, 1);
+ trace_pelt_cfs_tp(cfs_rq);
return 1;
}
@@ -317,6 +320,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
running)) {
___update_load_avg(&rq->avg_rt, 1, 1);
+ trace_pelt_rt_tp(rq);
return 1;
}
@@ -340,6 +344,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
running)) {
___update_load_avg(&rq->avg_dl, 1, 1);
+ trace_pelt_dl_tp(rq);
return 1;
}
@@ -388,8 +393,10 @@ int update_irq_load_avg(struct rq *rq, u64 running)
1,
1);
- if (ret)
+ if (ret) {
___update_load_avg(&rq->avg_irq, 1, 1);
+ trace_pelt_irq_tp(rq);
+ }
return ret;
}
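As the hunks above show, the new tracepoints fire only where the rq-level
PELT signals can actually change: right after ___update_load_avg() in the
cfs, rt, dl and irq update helpers, plus the CFS attach, detach and
propagation paths in fair.c. A registered probe is therefore invoked
whenever these helpers recompute the per-rq averages, with no polling on
the consumer's side; what it does with the rq / cfs_rq pointer is entirely
up to the probe.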