Message-Id: <20190125180711.1970973-8-jeremy.linton@arm.com>
Date: Fri, 25 Jan 2019 12:07:06 -0600
From: Jeremy Linton <jeremy.linton@....com>
To: linux-arm-kernel@...ts.infradead.org
Cc: catalin.marinas@....com, will.deacon@....com, marc.zyngier@....com,
suzuki.poulose@....com, dave.martin@....com,
shankerd@...eaurora.org, linux-kernel@...r.kernel.org,
ykaukab@...e.de, julien.thierry@....com, mlangsdo@...hat.com,
steven.price@....com, stefan.wahren@...e.com,
Jeremy Linton <jeremy.linton@....com>
Subject: [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
Display the mitigation status if it is active; otherwise
report the CPU as not affected, unless it both lacks CSV3
and is absent from our whitelist, in which case report it
as vulnerable.
Signed-off-by: Jeremy Linton <jeremy.linton@....com>
---
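For context, a minimal sketch of how the attribute populated by this patch can
be read from userspace, assuming the usual
/sys/devices/system/cpu/vulnerabilities/meltdown path that
CONFIG_GENERIC_CPU_VULNERABILITIES exposes; the small reader program is
illustrative only and not part of this series.

/* Illustrative reader for the sysfs attribute filled in by cpu_show_meltdown() */
#include <stdio.h>

int main(void)
{
	char buf[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/meltdown", "r");

	if (!f) {
		perror("meltdown");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		/* prints one of: "Mitigation: KPTI", "Not affected", "Vulnerable" */
		fputs(buf, stdout);
	fclose(f);
	return 0;
}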
arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a9e18b9cdc1e..624dfe0b5cdd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
return has_cpuid_feature(entry, scope);
}
+/* default value is invalid until unmap_kernel_at_el0() runs */
+static bool __meltdown_safe = true;
static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
{ /* sentinel */ }
};
char const *str = "command line option";
+ bool meltdown_safe;
+
+ meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+ /* Defer to CPU feature registers */
+ if (has_cpuid_feature(entry, scope))
+ meltdown_safe = true;
+
+ if (!meltdown_safe)
+ __meltdown_safe = false;
/*
* For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
return kaslr_offset() > 0;
- /* Don't force KPTI for CPUs that are not vulnerable */
- if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
- return false;
-
- /* Defer to CPU feature registers */
- return !has_cpuid_feature(entry, scope);
+ return !meltdown_safe;
}
static void
@@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
}
core_initcall(enable_mrs_emulation);
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ if (arm64_kernel_unmapped_at_el0())
+ return sprintf(buf, "Mitigation: KPTI\n");
+
+ if (__meltdown_safe)
+ return sprintf(buf, "Not affected\n");
+
+ return sprintf(buf, "Vulnerable\n");
+}
+#endif
--
2.17.2