Message-ID: <20250905133035.275517-14-ardb+git@google.com>
Date: Fri, 5 Sep 2025 15:30:41 +0200
From: Ard Biesheuvel <ardb+git@...gle.com>
To: linux-efi@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
Ard Biesheuvel <ardb@...nel.org>, Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>, Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack
and FP/SIMD state

From: Ard Biesheuvel <ardb@...nel.org>

Replace the spinlock in the arm64 glue code with a semaphore, so that
the CPU can be preempted while running an EFI runtime service.

Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
---
 arch/arm64/kernel/efi.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 9b03f3d77a25..8b999c07c7d1 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -165,12 +165,19 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
 	return s;
 }
 
-static DEFINE_RAW_SPINLOCK(efi_rt_lock);
+static DEFINE_SEMAPHORE(efi_rt_lock, 1);
 
 bool arch_efi_call_virt_setup(void)
 {
+	/*
+	 * This might be called from a non-sleepable context so try to take the
+	 * lock but don't block on it. This should never occur in practice, as
+	 * all EFI runtime calls are serialized under the efi_runtime_lock.
+	 */
+	if (WARN_ON(down_trylock(&efi_rt_lock)))
+		return false;
+
 	efi_virtmap_load();
-	raw_spin_lock(&efi_rt_lock);
 	__efi_fpsimd_begin();
 	return true;
 }
@@ -178,8 +185,8 @@ bool arch_efi_call_virt_setup(void)
 void arch_efi_call_virt_teardown(void)
 {
 	__efi_fpsimd_end();
-	raw_spin_unlock(&efi_rt_lock);
 	efi_virtmap_unload();
+	up(&efi_rt_lock);
 }
 
 asmlinkage u64 *efi_rt_stack_top __ro_after_init;
--
2.51.0.355.g5224444f11-goog
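
For context, the core of the change is the locking pattern below: a
semaphore initialised to a count of 1 behaves like a mutex whose holder
may sleep or be preempted, while down_trylock() keeps the acquisition
safe even if the hook were ever entered from a non-sleepable context.
This is only a self-contained, illustrative sketch of that pattern; the
example_* names are made up and not part of the patch or the kernel API.

/*
 * Illustrative sketch of the locking pattern used in the patch above;
 * the example_* names are hypothetical.
 */
#include <linux/semaphore.h>
#include <linux/bug.h>
#include <linux/delay.h>

/* Binary semaphore: like a mutex, but a trylock is safe from any context. */
static DEFINE_SEMAPHORE(example_rt_lock, 1);

static bool example_rt_enter(void)
{
	/*
	 * down_trylock() returns nonzero if the semaphore could not be
	 * taken without sleeping, so this never blocks. Contention is
	 * not expected here, hence the WARN_ON().
	 */
	if (WARN_ON(down_trylock(&example_rt_lock)))
		return false;
	return true;
}

static void example_rt_exit(void)
{
	up(&example_rt_lock);
}

static void example_rt_call(void)
{
	if (!example_rt_enter())
		return;

	/*
	 * Unlike a raw spinlock, holding the semaphore does not disable
	 * preemption, so a long-running firmware call made at this point
	 * may be preempted, and the holder may even sleep.
	 */
	msleep(10);	/* stand-in for a slow EFI runtime service */

	example_rt_exit();
}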