Message-Id: <20210107172908.42686-5-vincenzo.frascino@arm.com>
Date: Thu, 7 Jan 2021 17:29:08 +0000
From: Vincenzo Frascino <vincenzo.frascino@....com>
To: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com
Cc: Vincenzo Frascino <vincenzo.frascino@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>,
Evgenii Stepanov <eugenis@...gle.com>,
Branislav Rankov <Branislav.Rankov@....com>,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH v2 4/4] arm64: mte: Optimize mte_assign_mem_tag_range()

mte_assign_mem_tag_range() is called on the hot paths of hardware tag-based
KASAN (HW_TAGS), which is meant for production use, hence it is worth
optimizing it to reduce the overhead.

Optimize mte_assign_mem_tag_range() by replacing the out-of-line assembly
routine with a static inline function that tags one MTE granule per loop
iteration via an inline-assembly STG, following the suggestions provided at
[1].
[1] https://lore.kernel.org/r/CAAeHK+wCO+J7D1_T89DG+jJrPLk3X9RsGFKxJGd0ZcUFjQT-9Q@mail.gmail.com/
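
For reference, the new inline routine walks the buffer one MTE granule at a
time and tags each granule based on the tag of the pointer passed in. The
plain C model below is only an illustrative user-space sketch of the loop
bounds: the names tag_mem(), shadow_tags[] and pool[] are made up for the
example, and a shadow array stands in for the hardware allocation tags that
STG would actually write:

#include <stdint.h>
#include <stdio.h>

#define MTE_GRANULE_SIZE	16UL
#define POOL_SIZE		256UL

static uint8_t pool[POOL_SIZE];
/* One shadow tag per 16-byte granule, standing in for the MTE allocation tags. */
static uint8_t shadow_tags[POOL_SIZE / MTE_GRANULE_SIZE];

/* Mirrors the loop bounds of the new inline mte_assign_mem_tag_range(). */
static void tag_mem(void *addr, size_t size, uint8_t tag)
{
	uint64_t _addr = (uint64_t)(uintptr_t)addr;
	uint64_t _end = _addr + size;

	/*
	 * As in the kernel version: addr and size are assumed non-zero and
	 * MTE_GRANULE_SIZE aligned.
	 */
	do {
		/* Stand-in for "stg %0, [%0]": tag one granule. */
		shadow_tags[(_addr - (uint64_t)(uintptr_t)pool) / MTE_GRANULE_SIZE] = tag;
		_addr += MTE_GRANULE_SIZE;
	} while (_addr < _end);
}

int main(void)
{
	tag_mem(pool + 32, 48, 0xa);	/* tags granules 2, 3 and 4 */

	for (unsigned long i = 0; i < POOL_SIZE / MTE_GRANULE_SIZE; i++)
		printf("granule %2lu: tag %#x\n", i, shadow_tags[i]);

	return 0;
}
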
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Will Deacon <will@...nel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@....com>
---
arch/arm64/include/asm/mte.h | 26 +++++++++++++++++++++++++-
arch/arm64/lib/mte.S | 15 ---------------
2 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index a60d3718baae..c341ce6b9779 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -50,7 +50,31 @@ long get_mte_ctrl(struct task_struct *task);
int mte_ptrace_copy_tags(struct task_struct *child, long request,
unsigned long addr, unsigned long data);
-void mte_assign_mem_tag_range(void *addr, size_t size);
+static inline void mte_assign_mem_tag_range(void *addr, size_t size)
+{
+ u64 _addr = (u64)addr;
+ u64 _end = _addr + size;
+
+ /*
+ * This function must be invoked from an MTE enabled context.
+ *
+ * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
+ * size must be non-zero and MTE_GRANULE_SIZE aligned.
+ */
+ do {
+ /*
+ * 'asm volatile' is required to prevent the compiler from moving
+ * the statement out of the loop.
+ */
+ asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
+ :
+ : "r" (_addr)
+ : "memory");
+
+ _addr += MTE_GRANULE_SIZE;
+ } while (_addr < _end);
+}
+
#else /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 9e1a12e10053..a0a650451510 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -150,18 +150,3 @@ SYM_FUNC_START(mte_restore_page_tags)
ret
SYM_FUNC_END(mte_restore_page_tags)
-/*
- * Assign allocation tags for a region of memory based on the pointer tag
- * x0 - source pointer
- * x1 - size
- *
- * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
- * size must be non-zero and MTE_GRANULE_SIZE aligned.
- */
-SYM_FUNC_START(mte_assign_mem_tag_range)
-1: stg x0, [x0]
- add x0, x0, #MTE_GRANULE_SIZE
- subs x1, x1, #MTE_GRANULE_SIZE
- b.gt 1b
- ret
-SYM_FUNC_END(mte_assign_mem_tag_range)
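
As a side note on the loop shape: the removed assembly counted the size down
(subs/b.gt), whereas the new inline C version walks an end pointer
(_addr < _end). Under the stated preconditions (non-zero, MTE_GRANULE_SIZE
aligned size) both perform the same number of STG iterations; the stand-alone
check below is purely illustrative and not part of the patch:

#include <assert.h>
#include <stdio.h>

#define MTE_GRANULE_SIZE	16UL

/* Iteration count of the removed assembly loop: subs x1, x1, #16; b.gt 1b */
static unsigned long iters_asm(unsigned long size)
{
	unsigned long n = 0;

	do {
		n++;
		size -= MTE_GRANULE_SIZE;
	} while ((long)size > 0);

	return n;
}

/* Iteration count of the new inline C loop: while (_addr < _end) */
static unsigned long iters_inline(unsigned long addr, unsigned long size)
{
	unsigned long end = addr + size;
	unsigned long n = 0;

	do {
		n++;
		addr += MTE_GRANULE_SIZE;
	} while (addr < end);

	return n;
}

int main(void)
{
	for (unsigned long size = MTE_GRANULE_SIZE; size <= 4096; size += MTE_GRANULE_SIZE)
		assert(iters_asm(size) == iters_inline(0x1000, size));

	printf("old and new loops agree on the granule count\n");

	return 0;
}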
--
2.30.0