Date:   Wed, 28 Apr 2021 23:17:13 -0700
From:   Palmer Dabbelt <palmer@...belt.com>
To:     linux-riscv@...ts.infradead.org
Cc:     Paul Walmsley <paul.walmsley@...ive.com>,
        Palmer Dabbelt <palmer@...belt.com>, aou@...s.berkeley.edu,
        peterz@...radead.org, jpoimboe@...hat.com, jbaron@...mai.com,
        rostedt@...dmis.org, ardb@...nel.org,
        Atish Patra <Atish.Patra@....com>,
        Anup Patel <Anup.Patel@....com>, akpm@...ux-foundation.org,
        rppt@...nel.org, mhiramat@...nel.org, zong.li@...ive.com,
        guoren@...ux.alibaba.com, wangkefeng.wang@...wei.com,
        0x7f454c46@...il.com, chenhuang5@...wei.com,
        linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
        kernel-team@...roid.com, Palmer Dabbelt <palmerdabbelt@...gle.com>,
        Changbin Du <changbin.du@...il.com>
Subject: [PATCH] RISC-V: insn: Use a raw spinlock to protect TEXT_POKE*

From: Palmer Dabbelt <palmerdabbelt@...gle.com>

We currently use text_mutex to protect the fixmap sections from
concurrent callers.  This is convenient for kprobes, as the generic code
already holds text_mutex, but ftrace doesn't, which triggers a lockdep
assertion.  We could take text_mutex for ftrace, but the jump label
implementation (which currently takes text_mutex) isn't explicitly
documented as being allowed to sleep, and it's called from enough places
that it seems safer to just avoid sleeping.

arm64 and parisc, the other two TEXT_POKE-style patching
implementations, already use raw spinlocks.  abffa6f3b157 ("arm64:
convert patch_lock to raw lock") lays out the case for a raw spinlock as
opposed to a regular spinlock, and while I don't know of anyone running
RT on RISC-V yet, I'm sure it will eventually show up and I don't see
any reason to wait.
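
For reference, the pattern being switched to is a plain raw-spinlock
critical section around the fixmap mapping and write.  This is only a
minimal sketch of the idiom; patch_example() is a hypothetical name, the
real change is in the hunk below:

    #include <linux/spinlock.h>

    static DEFINE_RAW_SPINLOCK(patch_lock);

    static void patch_example(void)
    {
            unsigned long flags;

            /* A raw spinlock stays a spinning lock even on PREEMPT_RT. */
            raw_spin_lock_irqsave(&patch_lock, flags);

            /* ... map FIX_TEXT_POKE*, write the instruction, unmap ... */

            raw_spin_unlock_irqrestore(&patch_lock, flags);
    }

Unlike a regular spinlock, a raw spinlock is not converted into a
sleeping lock on RT, so it remains usable from the non-sleepable jump
label path.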

Fixes: ebc00dde8a97 ("riscv: Add jump-label implementation")
Reported-by: Changbin Du <changbin.du@...il.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@...gle.com>
---
 arch/riscv/include/asm/fixmap.h |  3 +++
 arch/riscv/kernel/jump_label.c  |  2 --
 arch/riscv/kernel/patch.c       | 13 +++++++++----
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/include/asm/fixmap.h b/arch/riscv/include/asm/fixmap.h
index 54cbf07fb4e9..d1c0a1f123cf 100644
--- a/arch/riscv/include/asm/fixmap.h
+++ b/arch/riscv/include/asm/fixmap.h
@@ -24,8 +24,11 @@ enum fixed_addresses {
 	FIX_HOLE,
 	FIX_PTE,
 	FIX_PMD,
+
+	/* Only used in kernel/insn.c */
 	FIX_TEXT_POKE1,
 	FIX_TEXT_POKE0,
+
 	FIX_EARLYCON_MEM_BASE,
 
 	__end_of_permanent_fixed_addresses,
diff --git a/arch/riscv/kernel/jump_label.c b/arch/riscv/kernel/jump_label.c
index 20e09056d141..45bb32f91b5c 100644
--- a/arch/riscv/kernel/jump_label.c
+++ b/arch/riscv/kernel/jump_label.c
@@ -35,9 +35,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
 		insn = RISCV_INSN_NOP;
 	}
 
-	mutex_lock(&text_mutex);
 	patch_text_nosync(addr, &insn, sizeof(insn));
-	mutex_unlock(&text_mutex);
 }
 
 void arch_jump_label_transform_static(struct jump_entry *entry,
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 0b552873a577..dfa7ee8eb63f 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -19,6 +19,8 @@ struct patch_insn {
 	atomic_t cpu_count;
 };
 
+static DEFINE_RAW_SPINLOCK(patch_lock);
+
 #ifdef CONFIG_MMU
 /*
  * The fix_to_virt(, idx) needs a const value (not a dynamic variable of
@@ -54,13 +56,14 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 	void *waddr = addr;
 	bool across_pages = (((uintptr_t) addr & ~PAGE_MASK) + len) > PAGE_SIZE;
 	int ret;
+	unsigned long flags = 0;
 
 	/*
-	 * Before reaching here, it was expected to lock the text_mutex
-	 * already, so we don't need to give another lock here and could
-	 * ensure that it was safe between each cores.
+	 * FIX_TEXT_POKE{0,1} are only used for text patching, but we must
+	 * ensure that concurrent callers do not re-map these before we're done
+	 * with them.
 	 */
-	lockdep_assert_held(&text_mutex);
+	raw_spin_lock_irqsave(&patch_lock, flags);
 
 	if (across_pages)
 		patch_map(addr + len, FIX_TEXT_POKE1);
@@ -74,6 +77,8 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 	if (across_pages)
 		patch_unmap(FIX_TEXT_POKE1);
 
+	raw_spin_unlock_irqrestore(&patch_lock, flags);
+
 	return ret;
 }
 NOKPROBE_SYMBOL(patch_insn_write);
-- 
2.31.1.498.g6c1eba8ee3d-goog
