Message-Id: <1463007752-116802-25-git-send-email-davidcc@google.com>
Date:	Wed, 11 May 2016 16:02:24 -0700
From:	David Carrillo-Cisneros <davidcc@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Ingo Molnar <mingo@...hat.com>
Cc:	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	Matt Fleming <matt@...eblueprint.co.uk>,
	Tony Luck <tony.luck@...el.com>,
	Stephane Eranian <eranian@...gle.com>,
	Paul Turner <pjt@...gle.com>,
	David Carrillo-Cisneros <davidcc@...gle.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH v2 24/32] sched: introduce the finish_arch_pre_lock_switch() scheduler hook

This hook allows architecture-specific code to be called right after
perf_events' context switch, but before the scheduler lock is released.

It serves two uses in this patch series (introduced in the next two
patches):
  1) Calls CQM's cgroup context switch code that updates the current RMID
  when no perf event is active (in continuous monitoring mode).
  2) Calls __pqr_ctx_switch to perform the final write to the slow
  PQR_ASSOC MSR from the PQR software state.

Both use cases start monitoring for the next task, a role analogous to
that of perf_event_task_sched_in.
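
As a sketch, the #ifndef guards added to kernel/sched/sched.h below let an
architecture override the default no-op with a static inline of the same
name. The actual x86 definition (using __pqr_ctx_switch as described
above) is only introduced in the next two patches, so the following is
illustrative rather than part of this patch:

	/* sketch of an x86 override, e.g. in asm/pqr_common.h */
	static inline void finish_arch_pre_lock_switch(void)
	{
		__pqr_ctx_switch();	/* final write to PQR_ASSOC */
	}
	#define finish_arch_pre_lock_switch finish_arch_pre_lock_switch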

In a future series, use case (1) will be expanded by Intel's CAT to also
update the next CLOSID. Since Intel's CAT is independent of perf events,
the hook that performs (1) does not belong in perf, yet it must be called
as close to perf's sched_in as possible.

Reviewed-by: Stephane Eranian <eranian@...gle.com>
Signed-off-by: David Carrillo-Cisneros <davidcc@...gle.com>
---
 arch/x86/include/asm/processor.h | 1 +
 kernel/sched/core.c              | 1 +
 kernel/sched/sched.h             | 3 +++
 3 files changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 9264476..c85fd82 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -22,6 +22,7 @@ struct vm86;
 #include <asm/nops.h>
 #include <asm/special_insns.h>
 #include <asm/fpu/types.h>
+#include <asm/pqr_common.h>
 
 #include <linux/personality.h>
 #include <linux/cache.h>
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d1f7149..a1200c2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2623,6 +2623,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	prev_state = prev->state;
 	vtime_task_switch(prev);
 	perf_event_task_sched_in(prev, current);
+	finish_arch_pre_lock_switch();
 	finish_lock_switch(rq, prev);
 	finish_arch_post_lock_switch();
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ec2e8d2..cb48b5c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1077,6 +1077,9 @@ static inline int task_on_rq_migrating(struct task_struct *p)
 #ifndef prepare_arch_switch
 # define prepare_arch_switch(next)	do { } while (0)
 #endif
+#ifndef finish_arch_pre_lock_switch
+# define finish_arch_pre_lock_switch()	do { } while (0)
+#endif
 #ifndef finish_arch_post_lock_switch
 # define finish_arch_post_lock_switch()	do { } while (0)
 #endif
-- 
2.8.0.rc3.226.g39d4020
