Message-ID: <20190131175418.24b7811c@donnerap.cambridge.arm.com>
Date: Thu, 31 Jan 2019 17:54:18 +0000
From: Andre Przywara <andre.przywara@....com>
To: Jeremy Linton <jeremy.linton@....com>
Cc: linux-arm-kernel@...ts.infradead.org, stefan.wahren@...e.com,
mlangsdo@...hat.com, suzuki.poulose@....com, marc.zyngier@....com,
catalin.marinas@....com, julien.thierry@....com,
will.deacon@....com, linux-kernel@...r.kernel.org,
steven.price@....com, ykaukab@...e.de, dave.martin@....com,
shankerd@...eaurora.org
Subject: Re: [PATCH v4 07/12] arm64: add sysfs vulnerability show for
meltdown
On Fri, 25 Jan 2019 12:07:06 -0600
Jeremy Linton <jeremy.linton@....com> wrote:
Hi,
> Display the mitigation status if active, otherwise
> assume the cpu is safe unless it doesn't have CSV3
> and isn't in our whitelist.
>
> Signed-off-by: Jeremy Linton <jeremy.linton@....com>
> ---
> arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
> 1 file changed, 27 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index a9e18b9cdc1e..624dfe0b5cdd 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
> return has_cpuid_feature(entry, scope);
> }
>
> +/* default value is invalid until unmap_kernel_at_el0() runs */
Shall we somehow enforce this? For instance by making __meltdown_safe
an enum, initialised to UNKNOWN?
Then bail out with a BUG_ON or WARN_ON in the sysfs code?
I just want to avoid accidentally reporting "safe" when we actually
aren't.
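Something along these lines maybe (completely untested sketch, the enum
and constant names are just made up for illustration):

enum meltdown_state {
	MELTDOWN_UNKNOWN,	/* unmap_kernel_at_el0() has not run yet */
	MELTDOWN_SAFE,
	MELTDOWN_VULNERABLE,
};

static enum meltdown_state __meltdown_safe = MELTDOWN_UNKNOWN;

and then in cpu_show_meltdown():

	if (WARN_ON(__meltdown_safe == MELTDOWN_UNKNOWN))
		return sprintf(buf, "Unknown\n");

	if (__meltdown_safe == MELTDOWN_SAFE)
		return sprintf(buf, "Not affected\n");

	return sprintf(buf, "Vulnerable\n");

That way a stray call into the sysfs handler before the capability check
has run triggers a warning instead of silently claiming the CPU is safe.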
> +static bool __meltdown_safe = true;
> static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
> static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> @@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> { /* sentinel */ }
> };
> char const *str = "command line option";
> + bool meltdown_safe;
> +
> + meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
> +
> + /* Defer to CPU feature registers */
> + if (has_cpuid_feature(entry, scope))
> + meltdown_safe = true;
> +
> + if (!meltdown_safe)
> + __meltdown_safe = false;
>
> /*
> * For reasons that aren't entirely clear, enabling KPTI on Cavium
> @@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> return kaslr_offset() > 0;
>
> - /* Don't force KPTI for CPUs that are not vulnerable */
> - if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
> - return false;
> -
> - /* Defer to CPU feature registers */
> - return !has_cpuid_feature(entry, scope);
> + return !meltdown_safe;
> }
>
> static void
> @@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
> }
>
> core_initcall(enable_mrs_emulation);
> +
> +#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
> +ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> + char *buf)
w/s issue.
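(presumably the continuation line wants to line up with the opening
parenthesis, i.e. something like:

ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
			  char *buf)
)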
Cheers,
Andre.
> +{
> + if (arm64_kernel_unmapped_at_el0())
> + return sprintf(buf, "Mitigation: KPTI\n");
> +
> + if (__meltdown_safe)
> + return sprintf(buf, "Not affected\n");
> +
> + return sprintf(buf, "Vulnerable\n");
> +}
> +#endif