Message-Id: <CB14B1BC-224D-4C10-969C-9C6C7E28F76B@sjtu.edu.cn>
Date: Sun, 26 Jan 2025 18:33:12 +0800
From: Zheyun Shen <szy0127@...u.edu.cn>
To: Nikunj A Dadhania <nikunj@....com>
Cc: thomas.lendacky@....com,
seanjc@...gle.com,
pbonzini@...hat.com,
tglx@...utronix.de,
kevinloughlin@...gle.com,
mingo@...hat.com,
bp@...en8.de,
kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 3/3] KVM: SVM: Flush cache only on CPUs running SEV guest
Nikunj A Dadhania <nikunj@....com> writes:

> Zheyun Shen <szy0127@...u.edu.cn> writes:
>
>> On AMD CPUs that do not ensure cache consistency, every memory page
>> reclamation in an SEV guest triggers a call to wbinvd_on_all_cpus(),
>> which hurts the performance of other programs on the host.
>>
>> Typically, an AMD server may have 128 cores or more, while the SEV guest
>> might only use 8 of those cores. Meanwhile, the host can use qemu-affinity
>> to bind these 8 vCPUs to specific physical CPUs.
>>
>> Therefore, recording which physical CPUs each vCPU runs on allows flushing
>> only those CPUs instead of flushing the cache on all CPUs every time.
>>
>> Suggested-by: Sean Christopherson <seanjc@...gle.com>
>> Signed-off-by: Zheyun Shen <szy0127@...u.edu.cn>
>> ---
>> arch/x86/kvm/svm/sev.c | 39 ++++++++++++++++++++++++++++++++++++---
>> arch/x86/kvm/svm/svm.c | 2 ++
>> arch/x86/kvm/svm/svm.h | 5 ++++-
>> 3 files changed, 42 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
>> index 1ce67de9d..91469edd1 100644
>> --- a/arch/x86/kvm/svm/sev.c
>> +++ b/arch/x86/kvm/svm/sev.c
>> @@ -252,6 +252,36 @@ static void sev_asid_free(struct kvm_sev_info *sev)
>> sev->misc_cg = NULL;
>> }
>>
>> +static struct cpumask *sev_get_wbinvd_dirty_mask(struct kvm *kvm)
>> +{
>> + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>
> There is a helper to get sev_info: to_kvm_sev_info(), if you use that,
> sev_get_wbinvd_dirty_mask() helper will not be needed.
>
>> +
>> + return sev->wbinvd_dirty_mask;
>> +}
>> +
>> +void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>> +{
>> + /*
>> + * To optimize cache flushes when memory is reclaimed from an SEV VM,
>> + * track physical CPUs that enter the guest for SEV VMs and thus can
>> + * have encrypted, dirty data in the cache, and flush caches only for
>> + * CPUs that have entered the guest.
>> + */
>> + cpumask_set_cpu(cpu, sev_get_wbinvd_dirty_mask(vcpu->kvm));
>> +}
>> +
>> +static void sev_do_wbinvd(struct kvm *kvm)
>> +{
>> + struct cpumask *dirty_mask = sev_get_wbinvd_dirty_mask(kvm);
>> +
>> + /*
>> + * TODO: Clear CPUs from the bitmap prior to flushing. Doing so
>> + * requires serializing multiple calls and having CPUs mark themselves
>> + * "dirty" if they are currently running a vCPU for the VM.
>> + */
>> + wbinvd_on_many_cpus(dirty_mask);
>> +}
>
> Something like the below
>
> void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> {
> /* ... */
> cpumask_set_cpu(cpu, to_kvm_sev_info(vcpu->kvm)->wbinvd_dirty_mask);
> }
>
> static void sev_do_wbinvd(struct kvm *kvm)
> {
> /* ... */
> wbinvd_on_many_cpus(to_kvm_sev_info(kvm)->wbinvd_dirty_mask);
> }
>
> Regards,
> Nikunj
>
Got it, thanks.
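
With to_kvm_sev_info() the sev_get_wbinvd_dirty_mask() helper indeed goes
away. An untested sketch of what I plan for the next version (only these two
functions change; the comments, the wbinvd_dirty_mask field and the rest of
the patch stay as in v5):

void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/*
	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
	 * track physical CPUs that enter the guest for SEV VMs and thus can
	 * have encrypted, dirty data in the cache, and flush caches only for
	 * CPUs that have entered the guest.
	 */
	cpumask_set_cpu(cpu, to_kvm_sev_info(vcpu->kvm)->wbinvd_dirty_mask);
}

static void sev_do_wbinvd(struct kvm *kvm)
{
	/*
	 * TODO: Clear CPUs from the bitmap prior to flushing.  Doing so
	 * requires serializing multiple calls and having CPUs mark themselves
	 * "dirty" if they are currently running a vCPU for the VM.
	 */
	wbinvd_on_many_cpus(to_kvm_sev_info(kvm)->wbinvd_dirty_mask);
}
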
Regards,
Zheyun Shen