Message-Id: <1477787923-61185-27-git-send-email-davidcc@google.com>
Date: Sat, 29 Oct 2016 17:38:23 -0700
From: David Carrillo-Cisneros <davidcc@...gle.com>
To: linux-kernel@...r.kernel.org
Cc: "x86@...nel.org" <x86@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andi Kleen <ak@...ux.intel.com>,
Kan Liang <kan.liang@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Vegard Nossum <vegard.nossum@...il.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Nilay Vaish <nilayvaish@...il.com>,
Borislav Petkov <bp@...e.de>,
Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
Ravi V Shankar <ravi.v.shankar@...el.com>,
Fenghua Yu <fenghua.yu@...el.com>,
Paul Turner <pjt@...gle.com>,
Stephane Eranian <eranian@...gle.com>,
David Carrillo-Cisneros <davidcc@...gle.com>
Subject: [PATCH v3 26/46] sched: introduce the finish_arch_pre_lock_switch() scheduler hook
This hook allows architecture-specific code to be called right after
perf_events' context switch but before the scheduler lock is released.
It serves two purposes in this patch series:
1) Call CMT's cgroup context switch code, which updates the current RMID
when no perf event is active (i.e. in continuous monitoring mode).
2) Call __pqr_ctx_switch to perform the write of the final value to
the slow PQR_ASSOC msr.
This hook differs from the one used by the Intel CAT series currently
under review on LKML. The CAT series simply adds a call to
intel_rdt_sched_in in __switch_to (see
"[PATCH v6 09/10] x86/intel_rdt: Add scheduler hook").
This series proposes using finish_arch_pre_lock_switch instead:
for CMT, integration with perf_events requires that the context switch
of the intel rdt common code occur after perf's context switch and
before the switch lock is released, so that (1) is performed correctly.
Signed-off-by: David Carrillo-Cisneros <davidcc@...gle.com>
---
kernel/sched/core.c | 1 +
kernel/sched/sched.h | 3 +++
2 files changed, 4 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 94732d1..2138ee6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2766,6 +2766,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	prev_state = prev->state;
 	vtime_task_switch(prev);
 	perf_event_task_sched_in(prev, current);
+	finish_arch_pre_lock_switch();
 	finish_lock_switch(rq, prev);
 	finish_arch_post_lock_switch();
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 055f935..0a0208e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1112,6 +1112,9 @@ static inline int task_on_rq_migrating(struct task_struct *p)
 #ifndef prepare_arch_switch
 # define prepare_arch_switch(next) do { } while (0)
 #endif
+#ifndef finish_arch_pre_lock_switch
+# define finish_arch_pre_lock_switch() do { } while (0)
+#endif
 #ifndef finish_arch_post_lock_switch
 # define finish_arch_post_lock_switch() do { } while (0)
 #endif
--
2.8.0.rc3.226.g39d4020