Message-ID: <20250807114220.559098-1-tjytimi@163.com>
Date: Thu, 7 Aug 2025 19:42:20 +0800
From: Jinyu Tang <tjytimi@....com>
To: Anup Patel <anup@...infault.org>,
Atish Patra <atish.patra@...ux.dev>,
Conor Dooley <conor.dooley@...rochip.com>,
Yong-Xuan Wang <yongxuan.wang@...ive.com>,
Paul Walmsley <paul.walmsley@...ive.com>
Cc: kvm@...r.kernel.org,
kvm-riscv@...ts.infradead.org,
linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Jinyu Tang <tjytimi@....com>
Subject: [PATCH] RISC-V: KVM: Skip CSR restore on same-CPU reload of a preempted VCPU

On RISC-V, the kvm_arch_vcpu_load() function is called in two cases:
1. When entering KVM_RUN from userspace ioctl.
2. When a preempted VCPU is scheduled back.

In the second case, if no other KVM VCPU has run on this CPU since the
current VCPU was preempted, the guest CSR values are still valid in
the hardware and do not need to be restored.
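
For reference, a rough sketch of the two call paths (simplified from
virt/kvm/kvm_main.c; some steps are omitted and the exact code differs
between kernel versions):

  /* Case 1: entering KVM_RUN from the userspace ioctl */
  void vcpu_load(struct kvm_vcpu *vcpu)
  {
          int cpu = get_cpu();

          __this_cpu_write(kvm_running_vcpu, vcpu);
          preempt_notifier_register(&vcpu->preempt_notifier);
          kvm_arch_vcpu_load(vcpu, cpu);
          put_cpu();
  }

  /* Case 2: a preempted VCPU is scheduled back in (preempt notifier) */
  static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
  {
          struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

          __this_cpu_write(kvm_running_vcpu, vcpu);
          kvm_arch_vcpu_load(vcpu, cpu);
          WRITE_ONCE(vcpu->scheduled_out, false);
  }

In both cases kvm_arch_vcpu_load() currently rewrites the guest CSRs, even
when the hardware state has not been touched since the last guest exit.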

Skip the CSR restore path when all of the following hold:
1. The VCPU was previously preempted
(vcpu->scheduled_out == 1).
2. It is being reloaded on the same physical CPU
(vcpu->arch.last_exit_cpu == cpu).
3. No other KVM VCPU has used this CPU in the meantime
(vcpu == __this_cpu_read(kvm_former_vcpu)).

This avoids many redundant CSR writes when a VCPU is frequently preempted
and rescheduled on the same physical CPU.
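
For context, vcpu->arch.last_exit_cpu is already recorded on every guest
exit, roughly as follows (simplified sketch of kvm_riscv_vcpu_enter_exit();
the exact code differs):

  /* arch/riscv/kvm/vcpu.c (simplified) */
  static void kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu, ...)
  {
          ...
          /* world switch into the guest and back */
          ...
          vcpu->arch.last_exit_cpu = vcpu->cpu;
          ...
  }

Checking last_exit_cpu alone is not enough, because another VCPU (possibly
from a different VM) may have run on this CPU in between and clobbered the
guest CSRs; the new per-CPU kvm_former_vcpu pointer, written in
kvm_arch_vcpu_put(), catches that case.
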
Signed-off-by: Jinyu Tang <tjytimi@....com>
---
arch/riscv/kvm/vcpu.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index f001e5640..1c6c55ee1 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -25,6 +25,8 @@
#define CREATE_TRACE_POINTS
#include "trace.h"
+static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_former_vcpu);
+
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, ecall_exit_stat),
@@ -581,6 +583,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
+ if (vcpu->scheduled_out && vcpu == __this_cpu_read(kvm_former_vcpu) &&
+ vcpu->arch.last_exit_cpu == cpu)
+ goto csr_restore_done;
+
if (kvm_riscv_nacl_sync_csr_available()) {
nsh = nacl_shmem();
nacl_csr_write(nsh, CSR_VSSTATUS, csr->vsstatus);
@@ -624,6 +630,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
kvm_riscv_mmu_update_hgatp(vcpu);
+csr_restore_done:
kvm_riscv_vcpu_timer_restore(vcpu);
kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context);
@@ -645,6 +652,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
void *nsh;
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+ __this_cpu_write(kvm_former_vcpu, vcpu);
+
vcpu->cpu = -1;
kvm_riscv_vcpu_aia_put(vcpu);
--
2.43.0