Message-Id: <20260124022042.2168136-1-xujiakai2025@iscas.ac.cn>
Date: Sat, 24 Jan 2026 02:20:42 +0000
From: Jiakai Xu <jiakaipeanut@...il.com>
To: linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org,
kvm-riscv@...ts.infradead.org,
kvm@...r.kernel.org
Cc: Alexandre Ghiti <alex@...ti.fr>,
Albert Ou <aou@...s.berkeley.edu>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <pjw@...nel.org>,
Atish Patra <atish.patra@...ux.dev>,
Anup Patel <anup@...infault.org>,
Jiakai Xu <xujiakai2025@...as.ac.cn>,
Jiakai Xu <jiakaiPeanut@...il.com>
Subject: [PATCH] RISC-V: KVM: Validate SBI STA shmem alignment in kvm_sbi_ext_sta_set_reg

The RISC-V SBI Steal-Time Accounting (STA) extension requires the shared
memory physical address to be 64-byte aligned, and the shared memory size
to be at least 64 bytes.
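
The 64-byte requirement matches the size of the steal-time record that
the SBI implementation writes into the shared memory. As a rough sketch
(the authoritative layout is struct sbi_sta_struct in
arch/riscv/include/asm/sbi.h):

	struct sbi_sta_struct {
		__le32 sequence;	/* odd while an update is in progress */
		__le32 flags;
		__le64 steal;		/* stolen time, in nanoseconds */
		u8 preempted;
		u8 pad[47];		/* pads the record to 64 bytes */
	} __packed;
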
KVM exposes the SBI STA shared memory configuration to userspace via
KVM_SET_ONE_REG. However, the current implementation of
kvm_sbi_ext_sta_set_reg() does not validate the alignment of the configured
shared memory address. As a result, userspace can install a misaligned
shared memory address that violates the SBI specification.
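
For example, a hypothetical userspace snippet along the following lines
(error handling and setup omitted; uses <linux/kvm.h> and <asm/kvm.h>,
register id encoding per arch/riscv/include/uapi/asm/kvm.h) can install
a misaligned address on a 64-bit host:

	/* Illustrative only: write a misaligned SBI STA shmem_lo value. */
	uint64_t val = 0x1001;	/* not 64-byte aligned */
	struct kvm_one_reg reg = {
		.id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
		      KVM_REG_RISCV_SBI_STATE | KVM_REG_RISCV_SBI_STA |
		      KVM_REG_RISCV_SBI_STA_REG(shmem_lo),
		.addr = (uint64_t)&val,
	};
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);	/* accepted before this patch */
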
Such an invalid configuration may later reach runtime code paths that
assume a valid and properly aligned shared memory region. In particular,
KVM_RUN can then trigger the following WARN_ON in
kvm_riscv_vcpu_record_steal_time():

  WARNING: arch/riscv/kvm/vcpu_sbi_sta.c:49 at kvm_riscv_vcpu_record_steal_time

WARN_ON paths are not expected to be reachable during normal runtime
execution, and may result in a kernel panic when panic_on_warn is
enabled.

Fix this by validating the shared memory alignment at the
KVM_SET_ONE_REG boundary and rejecting misaligned configurations with
-EINVAL, while still accepting INVALID_GPA so that userspace can keep
disabling the shared memory region. The new address is computed in a
local variable and only committed to vcpu->arch.sta.shmem once it is
known to be valid, similar to the existing logic in
kvm_sbi_sta_steal_time_set_shmem() and kvm_sbi_ext_sta_handler().
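
For reference, the SBI call path already follows roughly this pattern
(abridged sketch of kvm_sbi_sta_steal_time_set_shmem(); see
arch/riscv/kvm/vcpu_sbi_sta.c for the exact code):

	/* Disable request: both halves set to the disable sentinel. */
	if (shmem_phys_lo == SBI_SHMEM_DISABLE &&
	    shmem_phys_hi == SBI_SHMEM_DISABLE) {
		vcpu->arch.sta.shmem = INVALID_GPA;
		return SBI_SUCCESS;
	}

	/* Reject addresses that are not 64-byte aligned. */
	if (shmem_phys_lo & (SZ_64 - 1))
		return SBI_ERR_INVALID_PARAM;
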
With this change, invalid userspace state is rejected early and cannot
reach runtime code paths that rely on the SBI specification invariants.
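
With the patch applied, the hypothetical KVM_SET_ONE_REG snippet shown
above fails with -EINVAL instead of installing the misaligned address.
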
A reproducer triggering the WARN_ON and the complete kernel log are
available at: https://github.com/j1akai/temp/tree/main/20260124

Signed-off-by: Jiakai Xu <xujiakai2025@...as.ac.cn>
---
 arch/riscv/kvm/vcpu_sbi_sta.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_sta.c b/arch/riscv/kvm/vcpu_sbi_sta.c
index afa0545c3bcfc..7dfe671c42eaa 100644
--- a/arch/riscv/kvm/vcpu_sbi_sta.c
+++ b/arch/riscv/kvm/vcpu_sbi_sta.c
@@ -186,23 +186,25 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 		return -EINVAL;
 	value = *(const unsigned long *)reg_val;
 
+	gpa_t new_shmem = vcpu->arch.sta.shmem;
+
 	switch (reg_num) {
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_lo):
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t hi = upper_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = value;
-			vcpu->arch.sta.shmem |= hi << 32;
+			new_shmem = value;
+			new_shmem |= hi << 32;
 		} else {
-			vcpu->arch.sta.shmem = value;
+			new_shmem = value;
 		}
 		break;
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_hi):
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t lo = lower_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = ((gpa_t)value << 32);
-			vcpu->arch.sta.shmem |= lo;
+			new_shmem = ((gpa_t)value << 32);
+			new_shmem |= lo;
 		} else if (value != 0) {
 			return -EINVAL;
 		}
@@ -210,7 +212,10 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 	default:
 		return -ENOENT;
 	}
 
+	if (new_shmem != INVALID_GPA && !IS_ALIGNED(new_shmem, SZ_64))
+		return -EINVAL;
+	vcpu->arch.sta.shmem = new_shmem;
 	return 0;
 }
 
--
2.34.1