Message-Id: <20200911215118.2887710-2-robh@kernel.org>
Date: Fri, 11 Sep 2020 15:51:09 -0600
From: Rob Herring <robh@...nel.org>
To: Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Raphael Gault <raphael.gault@....com>,
Mark Rutland <mark.rutland@....com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Ian Rogers <irogers@...gle.com>, honnappa.nagarahalli@....com
Subject: [PATCH v3 01/10] arm64: pmu: Add hook to handle pmu-related undefined instructions
From: Raphael Gault <raphael.gault@....com>
This patch introduces protection for userspace processes that read the
PMU counter registers directly on a big.LITTLE system. It adds a hook to
handle the undefined instruction traps such accesses can generate.

The goal is to avoid delivering a signal to the process when the trap is
caused by the task being migrated to another CPU while it accesses a
counter, which makes that particular counter access invalid. Since we
cannot efficiently determine, in that context, how many counters are
physically implemented on each PMU, we consider that any faulting
counter access which is architecturally valid should not raise SIGILL
as long as the corresponding PMUSERENR_EL0 permissions are enabled.
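
For reference, the userspace access this guards is roughly the
seqlock-style read loop described in the perf_event_mmap_page comments
in include/uapi/linux/perf_event.h. A minimal sketch only; read_evcntr()
is a stand-in for the mrs-based counter read and is not part of this
patch:

  struct perf_event_mmap_page *pc = ...;	/* mmap()ed event page */
  u32 seq, idx;
  u64 count;

  do {
          seq = pc->lock;
          barrier();
          idx = pc->index;
          count = pc->offset;
          if (pc->cap_user_rdpmc && idx) {
                  /* this read may trap if we migrated after reading idx */
                  s64 pmc = read_evcntr(idx - 1);

                  /* sign-extend from pmc_width to 64 bits */
                  pmc <<= 64 - pc->pmc_width;
                  pmc >>= 64 - pc->pmc_width;
                  count += pmc;
          }
          barrier();
  } while (pc->lock != seq);

If the task migrates between reading pc->index and the counter read, the
lock check fails and the loop retries, so any value returned by the
emulation is acceptable as long as the access is not fatal.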
This commit also narrows the mask of the mrs_hook declared in
arch/arm64/kernel/cpufeature.c, which emulates feature register accesses
only. The previous mask was so broad that it matched every mrs
instruction, including ones unrelated to the emulated registers, which
made the PMU emulation inefficient.
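
For illustration, an undef_hook is selected by masking the trapping
instruction and comparing against instr_val, so a wider mask captures
more encodings. A rough sketch of the matching rule (illustration only;
the actual hook walk lives in arch/arm64/kernel/traps.c):

  /* a hook claims an undefined instruction when the masked bits match */
  static bool hook_matches(const struct undef_hook *hook, u32 insn)
  {
          return (insn & hook->instr_mask) == hook->instr_val;
  }

With the old 0xfff00000/0xd5300000 pair every mrs trapped into
emulate_mrs(); the new 0xffff0000/0xd5380000 pair only matches the
op0=3, op1=0 encoding space where the emulated ID registers live,
leaving the PMU register space (op0=3, op1=3) to the new pmu_hook below.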
Signed-off-by: Raphael Gault <raphael.gault@....com>
Signed-off-by: Rob Herring <robh@...nel.org>
---
v2:
- Fix warning for set but unused sys_reg
---
arch/arm64/kernel/cpufeature.c | 4 +--
arch/arm64/kernel/perf_event.c | 54 ++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a389b999482e..00bf53ffd9b0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2811,8 +2811,8 @@ static int emulate_mrs(struct pt_regs *regs, u32 insn)
}
static struct undef_hook mrs_hook = {
- .instr_mask = 0xfff00000,
- .instr_val = 0xd5300000,
+ .instr_mask = 0xffff0000,
+ .instr_val = 0xd5380000,
.pstate_mask = PSR_AA32_MODE_MASK,
.pstate_val = PSR_MODE_EL0t,
.fn = emulate_mrs,
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 462f9a9cc44b..70538ae684da 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -8,9 +8,11 @@
* This code is based heavily on the ARMv7 perf event code.
*/
+#include <asm/cpu.h>
#include <asm/irq_regs.h>
#include <asm/perf_event.h>
#include <asm/sysreg.h>
+#include <asm/traps.h>
#include <asm/virt.h>
#include <clocksource/arm_arch_timer.h>
@@ -1016,6 +1018,58 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
return probe.present ? 0 : -ENODEV;
}
+static int emulate_pmu(struct pt_regs *regs, u32 insn)
+{
+ u32 rt;
+ u32 pmuserenr;
+
+ rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
+ pmuserenr = read_sysreg(pmuserenr_el0);
+
+ if ((pmuserenr & (ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR)) !=
+ (ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR))
+ return -EINVAL;
+
+
+ /*
+ * Userspace is expected to only use this in the context of the scheme
+ * described in the struct perf_event_mmap_page comments.
+ *
+ * Given that context, we can only get here if we got migrated between
+ * getting the register index and doing the MSR read. This in turn
+ * implies we'll fail the sequence and retry, so any value returned is
+ * 'good', all we need is to be non-fatal.
+ *
+ * The choice of the value 0 comes from the fact that when
+ * accessing a register which is not counting events but is accessible,
+ * we get 0.
+ */
+ pt_regs_write_reg(regs, rt, 0);
+
+ arm64_skip_faulting_instruction(regs, 4);
+ return 0;
+}
+
+/*
+ * This hook will only be triggered by mrs
+ * instructions on PMU registers. This is mandatory
+ * in order to have a consistent behaviour even on
+ * big.LITTLE systems.
+ */
+static struct undef_hook pmu_hook = {
+ .instr_mask = 0xffff8800,
+ .instr_val = 0xd53b8800,
+ .fn = emulate_pmu,
+};
+
+static int __init enable_pmu_emulation(void)
+{
+ register_undef_hook(&pmu_hook);
+ return 0;
+}
+
+core_initcall(enable_pmu_emulation);
+
static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
int (*map_event)(struct perf_event *event),
const struct attribute_group *events,
--
2.25.1