Message-ID: <20210212172128.GE7718@arm.com>
Date: Fri, 12 Feb 2021 17:21:28 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Vincenzo Frascino <vincenzo.frascino@....com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com,
Andrew Morton <akpm@...ux-foundation.org>,
Will Deacon <will@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>,
Evgenii Stepanov <eugenis@...gle.com>,
Branislav Rankov <Branislav.Rankov@....com>,
Andrey Konovalov <andreyknvl@...gle.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>
Subject: Re: [PATCH v13 4/7] arm64: mte: Enable TCO in functions that can
read beyond buffer limits
On Thu, Feb 11, 2021 at 03:33:50PM +0000, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 706b7ab75f31..65ecb86dd886 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -26,6 +26,10 @@ u64 gcr_kernel_excl __ro_after_init;
>
> static bool report_fault_once = true;
>
> +/* Whether the MTE asynchronous mode is enabled. */
> +DEFINE_STATIC_KEY_FALSE(mte_async_mode);
> +EXPORT_SYMBOL_GPL(mte_async_mode);
> +
> static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
> {
> pte_t old_pte = READ_ONCE(*ptep);
> @@ -119,12 +123,24 @@ static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
> void mte_enable_kernel_sync(void)
> {
> __mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> +
> + /*
> + * This function is called on each active smp core at boot
> + * time, hence we do not need to take cpu_hotplug_lock again.
> + */
> + static_branch_disable_cpuslocked(&mte_async_mode);
> }
> EXPORT_SYMBOL_GPL(mte_enable_kernel_sync);
>
> void mte_enable_kernel_async(void)
> {
> __mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC);
> +
> + /*
> + * This function is called on each active smp core at boot
> + * time, hence we do not need to take cpu_hotplug_lock again.
> + */
> + static_branch_enable_cpuslocked(&mte_async_mode);
> }
Sorry, I missed the cpuslocked aspect before. Is there any reason you
need to use this API here? I suggested adding it to
mte_enable_kernel_sync() because kasan may at some point do this
dynamically at run-time, so the boot-time argument doesn't hold. But
the boot-time reasoning is also incorrect, as this function will be
called for hot-plugged CPUs as well after boot.
The only reason to use static_branch_*_cpuslocked() is if it is called
from a region that has already taken cpus_read_lock(), which I don't
think is the case here.
--
Catalin