Message-Id: <20250721-pmu_event_info-v4-7-ac76758a4269@rivosinc.com>
Date: Mon, 21 Jul 2025 20:15:23 -0700
From: Atish Patra <atishp@...osinc.com>
To: Anup Patel <anup@...infault.org>, Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Mayuresh Chitale <mchitale@...tanamicro.com>
Cc: linux-riscv@...ts.infradead.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
kvm-riscv@...ts.infradead.org, Atish Patra <atishp@...osinc.com>
Subject: [PATCH v4 7/9] RISC-V: KVM: Use the new gpa range validate helper function
Remove the duplicated code and use the new helper function to validate
the shared memory GPA.
Reviewed-by: Anup Patel <anup@...infault.org>
Signed-off-by: Atish Patra <atishp@...osinc.com>
---
arch/riscv/kvm/vcpu_pmu.c | 5 +----
arch/riscv/kvm/vcpu_sbi_sta.c | 6 ++----
2 files changed, 3 insertions(+), 8 deletions(-)
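
For reference, a rough sketch of what the helper used below could look
like (the real kvm_vcpu_validate_gpa_range() is introduced earlier in
this series; the body here is an assumption, not the actual
implementation):

/*
 * Hypothetical sketch: walk every guest page in [gpa, gpa + len) and
 * make sure it maps to a valid host VA that is writable when a
 * writable mapping was requested.
 */
static int kvm_vcpu_validate_gpa_range(struct kvm_vcpu *vcpu, gpa_t gpa,
					unsigned long len, bool write_access)
{
	gfn_t gfn = gpa_to_gfn(gpa);
	gfn_t end_gfn = gpa_to_gfn(gpa + len - 1);
	unsigned long hva;
	bool writable;

	while (gfn <= end_gfn) {
		hva = kvm_vcpu_gfn_to_hva_prot(vcpu, gfn, &writable);
		if (kvm_is_error_hva(hva) || (write_access && !writable))
			return -EPERM;
		gfn++;
	}

	return 0;
}

Both call sites below treat any non-zero return as SBI_ERR_INVALID_ADDRESS,
so the error handling stays identical to the open-coded checks that are
being removed.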
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 15d71a7b75ba..163bd4403fd0 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -409,8 +409,6 @@ int kvm_riscv_vcpu_pmu_snapshot_set_shmem(struct kvm_vcpu *vcpu, unsigned long s
int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
int sbiret = 0;
gpa_t saddr;
- unsigned long hva;
- bool writable;
if (!kvpmu || flags) {
sbiret = SBI_ERR_INVALID_PARAM;
@@ -432,8 +430,7 @@ int kvm_riscv_vcpu_pmu_snapshot_set_shmem(struct kvm_vcpu *vcpu, unsigned long s
goto out;
}
- hva = kvm_vcpu_gfn_to_hva_prot(vcpu, saddr >> PAGE_SHIFT, &writable);
- if (kvm_is_error_hva(hva) || !writable) {
+ if (kvm_vcpu_validate_gpa_range(vcpu, saddr, PAGE_SIZE, true)) {
sbiret = SBI_ERR_INVALID_ADDRESS;
goto out;
}
diff --git a/arch/riscv/kvm/vcpu_sbi_sta.c b/arch/riscv/kvm/vcpu_sbi_sta.c
index 5f35427114c1..67dfb613df6a 100644
--- a/arch/riscv/kvm/vcpu_sbi_sta.c
+++ b/arch/riscv/kvm/vcpu_sbi_sta.c
@@ -85,8 +85,6 @@ static int kvm_sbi_sta_steal_time_set_shmem(struct kvm_vcpu *vcpu)
unsigned long shmem_phys_hi = cp->a1;
u32 flags = cp->a2;
struct sbi_sta_struct zero_sta = {0};
- unsigned long hva;
- bool writable;
gpa_t shmem;
int ret;
@@ -111,8 +109,8 @@ static int kvm_sbi_sta_steal_time_set_shmem(struct kvm_vcpu *vcpu)
return SBI_ERR_INVALID_ADDRESS;
}
- hva = kvm_vcpu_gfn_to_hva_prot(vcpu, shmem >> PAGE_SHIFT, &writable);
- if (kvm_is_error_hva(hva) || !writable)
+ /* The spec requires the shmem to be 64-byte aligned. */
+ if (kvm_vcpu_validate_gpa_range(vcpu, shmem, 64, true))
return SBI_ERR_INVALID_ADDRESS;
ret = kvm_vcpu_write_guest(vcpu, shmem, &zero_sta, sizeof(zero_sta));
--
2.43.0