Message-Id: <20190604111459.2862-5-qais.yousef@arm.com>
Date: Tue, 4 Jun 2019 12:14:57 +0100
From: Qais Yousef <qais.yousef@....com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org,
Pavankumar Kondeti <pkondeti@...eaurora.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Uwe Kleine-König <u.kleine-koenig@...gutronix.de>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Quentin Perret <quentin.perret@....com>,
Qais Yousef <qais.yousef@....com>
Subject: [PATCH v3 4/6] sched: Add new tracepoint to track pelt at se level

The new tracepoint allows tracking PELT signals at sched_entity level.
It is supported for CFS tasks and task groups only.
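
As an illustrative sketch (not part of this patch), a module could attach
a probe to the new bare tracepoint roughly as below. This assumes the
tracepoint symbol is exported to modules; the probe and module names
(probe_pelt_se, pelt_se_probe_*) are made up for the example. The
register_trace_pelt_se_tp()/unregister_trace_pelt_se_tp() helpers are the
ones generated by DECLARE_TRACE().

#include <linux/module.h>
#include <linux/tracepoint.h>
#include <trace/events/sched.h>

/* Probe signature follows TP_PROTO, with the private data pointer first. */
static void probe_pelt_se(void *data, struct sched_entity *se)
{
	/* e.g. inspect se->avg.load_avg / se->avg.util_avg here */
}

static int __init pelt_se_probe_init(void)
{
	return register_trace_pelt_se_tp(probe_pelt_se, NULL);
}

static void __exit pelt_se_probe_exit(void)
{
	unregister_trace_pelt_se_tp(probe_pelt_se, NULL);
	tracepoint_synchronize_unregister();
}

module_init(pelt_se_probe_init);
module_exit(pelt_se_probe_exit);
MODULE_LICENSE("GPL");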
Signed-off-by: Qais Yousef <qais.yousef@....com>
---
 include/trace/events/sched.h | 4 ++++
 kernel/sched/fair.c          | 1 +
 kernel/sched/pelt.c          | 2 ++
 3 files changed, 7 insertions(+)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 520b89d384ec..c7dd9bc7f001 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -617,6 +617,10 @@ DECLARE_TRACE(pelt_irq_tp,
 	TP_PROTO(struct rq *rq),
 	TP_ARGS(rq));
 
+DECLARE_TRACE(pelt_se_tp,
+	TP_PROTO(struct sched_entity *se),
+	TP_ARGS(se));
+
 #endif /* _TRACE_SCHED_H */
 
 /* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dee1338ec4a9..8e0015ebf109 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3354,6 +3354,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
 	update_tg_cfs_runnable(cfs_rq, se, gcfs_rq);
 
 	trace_pelt_cfs_tp(cfs_rq);
+	trace_pelt_se_tp(se);
 
 	return 1;
 }
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index c9d4945861a4..7f1a1f641866 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -267,6 +267,7 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
 {
 	if (___update_load_sum(now, &se->avg, 0, 0, 0)) {
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
+		trace_pelt_se_tp(se);
 		return 1;
 	}
 
@@ -280,6 +281,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
 
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
 		cfs_se_util_change(&se->avg);
+		trace_pelt_se_tp(se);
 		return 1;
 	}
 
--
2.17.1