Message-Id: <20240820043543.837914-3-suleiman@google.com>
Date: Tue, 20 Aug 2024 13:35:42 +0900
From: Suleiman Souhlal <suleiman@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>, Sean Christopherson <seanjc@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Chao Gao <chao.gao@...el.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, ssouhlal@...ebsd.org,
Suleiman Souhlal <suleiman@...gle.com>
Subject: [PATCH v2 2/3] KVM: x86: Include host suspended time in steal time.
When the host resumes from suspend, the guest thinks that any task
that was running during the suspend ran for a long time, even though
the effective run time was much shorter, which can have negative
effects on scheduling. This is particularly noticeable if the guest
task is RT, as it can end up being throttled for a long time.
To mitigate this issue, we include the time that the host was
suspended in steal time, which lets the guest subtract the duration from
the tasks' runtime.
Note that the case of a suspend happening during a VM migration
might not be accounted for.
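For context, the guest-side counterpart is the existing paravirt
steal-time accounting: on each tick the guest reads the accumulated
steal value and charges the growth since the last read as stolen time
instead of task runtime. A rough, simplified sketch of that idea is
below; the names are illustrative only and not the exact kernel
functions:

#include <linux/percpu.h>
#include <linux/minmax.h>
#include <asm/paravirt.h>

static DEFINE_PER_CPU(u64, prev_steal_ns);

/*
 * Illustrative sketch only: charge the growth in reported steal time
 * (which, with this patch, also grows across a host suspend) as stolen
 * time rather than runtime of the currently running task.
 */
static u64 account_steal_delta(int cpu, u64 max_ns)
{
	u64 steal = paravirt_steal_clock(cpu);
	u64 prev = per_cpu(prev_steal_ns, cpu);
	u64 delta = min(steal - prev, max_ns);

	per_cpu(prev_steal_ns, cpu) = prev + delta;

	return delta; /* not charged to the running task */
}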
Signed-off-by: Suleiman Souhlal <suleiman@...gle.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 11 ++++++++++-
2 files changed, 11 insertions(+), 1 deletion(-)
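The hunk below calls kvm_total_suspend_ns(), which is introduced
earlier in this series and is not shown in this patch. Purely as an
illustration of the idea, and not the actual implementation from that
patch, such a helper could accumulate suspended time from a PM
notifier, e.g. by comparing how far CLOCK_BOOTTIME (which advances
across suspend) and CLOCK_MONOTONIC (which does not) progressed over
each suspend/resume cycle:

#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/suspend.h>
#include <linux/timekeeping.h>

static u64 total_suspend_ns;
static u64 boot_at_suspend, mono_at_suspend;

static int suspend_time_notify(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	switch (action) {
	case PM_SUSPEND_PREPARE:
		/* Snapshot both clocks just before suspending. */
		boot_at_suspend = ktime_get_boottime_ns();
		mono_at_suspend = ktime_get_ns();
		break;
	case PM_POST_SUSPEND:
		/*
		 * BOOTTIME advanced across the suspend, MONOTONIC did
		 * not: the difference of their deltas is the time spent
		 * suspended.
		 */
		total_suspend_ns += (ktime_get_boottime_ns() - boot_at_suspend) -
				    (ktime_get_ns() - mono_at_suspend);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block suspend_time_nb = {
	.notifier_call = suspend_time_notify,
};

static int __init suspend_time_init(void)
{
	return register_pm_notifier(&suspend_time_nb);
}
core_initcall(suspend_time_init);

/* Hypothetical stand-in for the helper added earlier in the series. */
u64 kvm_total_suspend_ns(void)
{
	return total_suspend_ns;
}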
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a68cb3eba78f8..728798decb6d12 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -898,6 +898,7 @@ struct kvm_vcpu_arch {
u8 preempted;
u64 msr_val;
u64 last_steal;
+ u64 last_suspend_ns;
struct gfn_to_hva_cache cache;
} st;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 70219e4069874a..104f3d318026fa 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3654,7 +3654,7 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
struct kvm_steal_time __user *st;
struct kvm_memslots *slots;
gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
- u64 steal;
+ u64 steal, suspend_ns;
u32 version;
if (kvm_xen_msr_enabled(vcpu->kvm)) {
@@ -3735,6 +3735,14 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
steal += current->sched_info.run_delay -
vcpu->arch.st.last_steal;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
+ /*
+ * Include the time that the host was suspended in steal time.
+ * Note that the case of a suspend happening during a VM migration
+ * might not be accounted for.
+ */
+ suspend_ns = kvm_total_suspend_ns();
+ steal += suspend_ns - vcpu->arch.st.last_suspend_ns;
+ vcpu->arch.st.last_suspend_ns = suspend_ns;
unsafe_put_user(steal, &st->steal, out);
version += 1;
@@ -12280,6 +12288,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+ vcpu->arch.st.last_suspend_ns = kvm_total_suspend_ns();
kvm_xen_init_vcpu(vcpu);
vcpu_load(vcpu);
kvm_set_tsc_khz(vcpu, vcpu->kvm->arch.default_tsc_khz);
--
2.46.0.184.g6999bdac58-goog