Message-ID: <875yynz5wp.wl-maz@kernel.org>
Date: Wed, 09 Jun 2021 11:30:46 +0100
From: Marc Zyngier <maz@...nel.org>
To: Steven Price <steven.price@....com>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Dave Martin <Dave.Martin@....com>,
Mark Rutland <mark.rutland@....com>,
Thomas Gleixner <tglx@...utronix.de>, qemu-devel@...gnu.org,
Juan Quintela <quintela@...hat.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
Richard Henderson <richard.henderson@...aro.org>,
Peter Maydell <peter.maydell@...aro.org>,
Haibo Xu <Haibo.Xu@....com>, Andrew Jones <drjones@...hat.com>
Subject: Re: [PATCH v14 1/8] arm64: mte: Handle race when synchronising tags
On Mon, 07 Jun 2021 12:08:09 +0100,
Steven Price <steven.price@....com> wrote:
>
> mte_sync_tags() used test_and_set_bit() to set the PG_mte_tagged flag
> before restoring/zeroing the MTE tags. However if another thread were to
> race and attempt to sync the tags on the same page before the first
> thread had completed restoring/zeroing then it would see the flag is
> already set and continue without waiting. This would potentially expose
> the previous contents of the tags to user space, and cause any updates
> that user space makes before the restoring/zeroing has completed to
> potentially be lost.
>
> Since this code is run from atomic contexts we can't just lock the page
> during the process. Instead implement a new (global) spinlock to protect
> the mte_sync_page_tags() function.
>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Reviewed-by: Catalin Marinas <catalin.marinas@....com>
> Signed-off-by: Steven Price <steven.price@....com>
> ---
> arch/arm64/kernel/mte.c | 20 +++++++++++++++++---
> 1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 125a10e413e9..a3583a7fd400 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -25,6 +25,7 @@
> u64 gcr_kernel_excl __ro_after_init;
>
> static bool report_fault_once = true;
> +static DEFINE_SPINLOCK(tag_sync_lock);
>
> #ifdef CONFIG_KASAN_HW_TAGS
> /* Whether the MTE asynchronous mode is enabled. */
> @@ -34,13 +35,22 @@ EXPORT_SYMBOL_GPL(mte_async_mode);
>
> static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
> {
> + unsigned long flags;
> pte_t old_pte = READ_ONCE(*ptep);
>
> + spin_lock_irqsave(&tag_sync_lock, flags);
Having thought a bit more about this after an offline discussion with
Catalin: why can't this lock be made per mm? We can't really share
tags across processes anyway, so this is limited to threads from the
same process.
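
For illustration only, something along these lines (the field name, the
extra mm parameter and how it gets plumbed through are all hypothetical,
not actual proposed code):

/* arch/arm64/include/asm/mmu.h */
typedef struct {
	/* existing fields (id, vdso, flags, ...) elided */
	spinlock_t	mte_sync_lock;	/* serialises tag init for this mm */
} mm_context_t;

/* arch/arm64/kernel/mte.c */
static void mte_sync_page_tags(struct mm_struct *mm, struct page *page,
			       pte_t *ptep, bool check_swap)
{
	unsigned long flags;
	pte_t old_pte = READ_ONCE(*ptep);

	/* Only threads of the same process can contend here. */
	spin_lock_irqsave(&mm->context.mte_sync_lock, flags);

	if (test_bit(PG_mte_tagged, &page->flags))
		goto out;

	/* ... restore the tags from swap, or zero them ... */

	set_bit(PG_mte_tagged, &page->flags);
out:
	spin_unlock_irqrestore(&mm->context.mte_sync_lock, flags);
}

The lock would still be taken with interrupts disabled, but contention
would be confined to mappers of the same mm rather than being
system-wide.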
I'd also like it to be documented that page sharing can only reliably
work with tagging if only one of the mappings is using tags.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.