Message-ID: <20250514174339.1834871-14-ardb+git@google.com>
Date: Wed, 14 May 2025 19:43:45 +0200
From: Ard Biesheuvel <ardb+git@...gle.com>
To: linux-efi@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
Ard Biesheuvel <ardb@...nel.org>, Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>, Peter Zijlstra <peterz@...radead.org>
Subject: [RFC PATCH 5/7] arm64/efi: Use a semaphore to protect the EFI stack
and FP/SIMD state
From: Ard Biesheuvel <ardb@...nel.org>
Replace the raw spinlock in the arm64 glue code with a semaphore, so that
the CPU can be preempted while running an EFI runtime service.
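
For reference, this is how the two helpers read with the patch applied
(simply the hunks below pieced together, not an additional change):

  static DEFINE_SEMAPHORE(efi_rt_lock, 1);

  bool arch_efi_call_virt_setup(void)
  {
          /* Refuse to block: this may be called from atomic context. */
          if (WARN_ON(down_trylock(&efi_rt_lock)))
                  return false;

          efi_virtmap_load();
          __efi_fpsimd_begin();
          return true;
  }

  void arch_efi_call_virt_teardown(void)
  {
          __efi_fpsimd_end();
          efi_virtmap_unload();
          up(&efi_rt_lock);
  }

Since down_trylock() never sleeps, the non-blocking acquisition remains
safe even if a caller does turn out to be atomic; the WARN_ON() merely
flags that unexpected case.
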
Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
---
arch/arm64/kernel/efi.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 44ad5e759af4..d01ae156bb63 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -164,12 +164,19 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
 	return s;
 }
 
-static DEFINE_RAW_SPINLOCK(efi_rt_lock);
+static DEFINE_SEMAPHORE(efi_rt_lock, 1);
 
 bool arch_efi_call_virt_setup(void)
 {
+	/*
+	 * This might be called from a non-sleepable context so try to take the
+	 * lock but don't block on it. This should never occur in practice, as
+	 * all EFI runtime calls are serialized under the efi_runtime_lock.
+	 */
+	if (WARN_ON(down_trylock(&efi_rt_lock)))
+		return false;
+
 	efi_virtmap_load();
-	raw_spin_lock(&efi_rt_lock);
 	__efi_fpsimd_begin();
 	return true;
 }
@@ -177,8 +184,8 @@ bool arch_efi_call_virt_setup(void)
 void arch_efi_call_virt_teardown(void)
 {
 	__efi_fpsimd_end();
-	raw_spin_unlock(&efi_rt_lock);
 	efi_virtmap_unload();
+	up(&efi_rt_lock);
 }
 
 asmlinkage u64 *efi_rt_stack_top __ro_after_init;
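
Note: since arch_efi_call_virt_setup() can now fail, whatever sits on the
efi_runtime_lock path has to check its return value before issuing the
firmware call. A hypothetical caller might look like the sketch below;
apart from the two arch hooks, every name in it is illustrative rather
than taken from this series:

  /* Hypothetical wrapper; only the two arch_* hooks are real. */
  static efi_status_t example_get_time(efi_runtime_services_t *rt,
                                       efi_time_t *tm)
  {
          efi_status_t status;

          if (!arch_efi_call_virt_setup())
                  return EFI_ABORTED;      /* lock unavailable, give up */

          status = rt->get_time(tm, NULL); /* the actual firmware call */

          arch_efi_call_virt_teardown();
          return status;
  }
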
--
2.49.0.1101.gccaa498523-goog