Message-Id: <20210524104513.13258-2-steven.price@arm.com>
Date: Mon, 24 May 2021 11:45:06 +0100
From: Steven Price <steven.price@....com>
To: Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>
Cc: Steven Price <steven.price@....com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Dave Martin <Dave.Martin@....com>,
Mark Rutland <mark.rutland@....com>,
Thomas Gleixner <tglx@...utronix.de>, qemu-devel@...gnu.org,
Juan Quintela <quintela@...hat.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
Richard Henderson <richard.henderson@...aro.org>,
Peter Maydell <peter.maydell@...aro.org>,
Haibo Xu <Haibo.Xu@....com>, Andrew Jones <drjones@...hat.com>
Subject: [PATCH v13 1/8] arm64: mte: Handle race when synchronising tags
mte_sync_tags() used test_and_set_bit() to set the PG_mte_tagged flag
before restoring/zeroing the MTE tags. However, if another thread raced
and attempted to sync the tags on the same page before the first thread
had completed the restore/zero, it would see the flag already set and
continue without waiting. This could expose the previous contents of
the tags to user space, and any updates that user space made before the
restore/zero completed could be lost.
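
For example (an illustrative interleaving of two threads syncing the
same as-yet-untagged page, implied by the description above):

  Thread A                          Thread B
  --------                          --------
  test_and_set_bit() -> 0
    (flag was clear; A starts
     the restore/zero)
                                    test_and_set_bit() -> 1
                                      (flag already set; B returns,
                                       stale tags visible to user space)
                                    user space updates the tags
  restore/zero completes,
    overwriting B's updates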
Since this code runs in atomic contexts, we can't simply lock the page
for the duration. Instead, introduce a new (global) spinlock to protect
the mte_sync_page_tags() function.
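
The flag handling thus becomes a double-checked pattern: an unlocked
test_bit() fast path in mte_sync_tags(), a recheck under the lock, and
the flag set only once the tags are valid. A condensed sketch of the
shape of the fix (the swap-restore and zeroing details are elided; see
the diff below for the real code):

  static DEFINE_SPINLOCK(tag_sync_lock);

  static void mte_sync_page_tags(struct page *page, pte_t *ptep,
                                 bool check_swap)
  {
          unsigned long flags;

          spin_lock_irqsave(&tag_sync_lock, flags);

          /*
           * Recheck with the lock held: another CPU may have already
           * completed the restore/zero and set the flag.
           */
          if (test_bit(PG_mte_tagged, &page->flags))
                  goto out;

          /* ... restore the tags from swap, or zero them ... */

          /* Only publish the flag once the tags are valid */
          set_bit(PG_mte_tagged, &page->flags);
  out:
          spin_unlock_irqrestore(&tag_sync_lock, flags);
  }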
Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Reviewed-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Steven Price <steven.price@....com>
---
arch/arm64/kernel/mte.c | 21 ++++++++++++++++++---
1 file changed, 18 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 125a10e413e9..45fac0e9c323 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -25,6 +25,7 @@
 u64 gcr_kernel_excl __ro_after_init;
 
 static bool report_fault_once = true;
+static DEFINE_SPINLOCK(tag_sync_lock);
 
 #ifdef CONFIG_KASAN_HW_TAGS
 /* Whether the MTE asynchronous mode is enabled. */
@@ -34,13 +35,22 @@ EXPORT_SYMBOL_GPL(mte_async_mode);
 
 static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 {
+	unsigned long flags;
 	pte_t old_pte = READ_ONCE(*ptep);
 
+	spin_lock_irqsave(&tag_sync_lock, flags);
+
+	/* Recheck with the lock held */
+	if (test_bit(PG_mte_tagged, &page->flags))
+		goto out;
+
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
-			return;
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_bit(PG_mte_tagged, &page->flags);
+			goto out;
+		}
 	}
 
 	page_kasan_tag_reset(page);
@@ -53,6 +63,10 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 	 */
 	smp_wmb();
 	mte_clear_page_tags(page_address(page));
+	set_bit(PG_mte_tagged, &page->flags);
+
+out:
+	spin_unlock_irqrestore(&tag_sync_lock, flags);
 }
 
 void mte_sync_tags(pte_t *ptep, pte_t pte)
@@ -60,10 +74,11 @@ void mte_sync_tags(pte_t *ptep, pte_t pte)
 	struct page *page = pte_page(pte);
 	long i, nr_pages = compound_nr(page);
 	bool check_swap = nr_pages == 1;
+	bool pte_is_tagged = pte_tagged(pte);
 
 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+		if (!test_bit(PG_mte_tagged, &page->flags))
			mte_sync_page_tags(page, ptep, check_swap);
 	}
 }
--
2.20.1