Message-Id: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
Date: Mon, 20 Jan 2025 20:05:00 +0800
From: Zheyun Shen <szy0127@...u.edu.cn>
To: thomas.lendacky@....com,
seanjc@...gle.com,
pbonzini@...hat.com,
tglx@...utronix.de,
kevinloughlin@...gle.com,
mingo@...hat.com,
bp@...en8.de
Cc: kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Zheyun Shen <szy0127@...u.edu.cn>
Subject: [PATCH v5 0/3] KVM: SVM: Flush cache only on CPUs running SEV guest

Previous versions pointed out the problem with wbinvd_on_all_cpus() in SEV
and maintained a cpumask of CPUs that have run the guest to avoid it. This
version further removes unnecessary calls to wbinvd().

Although dirty_mask is not maintained precisely and may still trigger
wbinvd on physical CPUs that are not currently running a SEV guest, this
is still an improvement over wbinvd_on_all_cpus(). Handling vCPU migration
is left to future work.
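
For context, below is a minimal sketch of the idea (see the patches for
the real code; the field name have_run_cpus and the call sites here are
illustrative assumptions, not necessarily what the series uses):

/* In struct kvm_sev_info: mask of physical CPUs that have ever run a
 * vCPU of this SEV guest; dynamically allocated, never cleared in v5.
 */
	cpumask_var_t have_run_cpus;

/* Record the physical CPU at vcpu_load() time so later flushes can be
 * limited to CPUs that may hold dirty guest cache lines.
 */
static void sev_vcpu_load(struct kvm *kvm, int cpu)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

	if (sev_guest(kvm))
		cpumask_set_cpu(cpu, sev->have_run_cpus);
}

static void smp_wbinvd(void *unused)
{
	wbinvd();
}

/* Flush caches only where needed instead of on every online CPU. */
static void sev_do_wbinvd(struct kvm *kvm)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

	on_each_cpu_mask(sev->have_run_cpus, smp_wbinvd, NULL, 1);
}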
---
v4 -> v5:
- Added a commit to remove unnecessary calls to wbinvd().
v3 -> v4:
- Added a wbinvd helper and exported it to SEV.
- Changed the struct cpumask in kvm_sev_info to a cpumask pointer, which is
  dynamically allocated and freed.
- Moved recording of the CPUs from pre_sev_run() to vcpu_load().
- Removed the code that clears the mask.
v2 -> v3:
- Replaced get_cpu() with the cpu parameter in pre_sev_run().
v1 -> v2:
- Added sev_do_wbinvd() to wrap two operations.
- Used cpumask_test_and_clear_cpu() to avoid concurrency problems.
---
Zheyun Shen (3):
KVM: x86: Add a wbinvd helper
KVM: SVM: Remove wbinvd in sev_vm_destroy()
KVM: SVM: Flush cache only on CPUs running SEV guest
arch/x86/kvm/svm/sev.c | 45 +++++++++++++++++++++++++++++++++---------
arch/x86/kvm/svm/svm.c | 2 ++
arch/x86/kvm/svm/svm.h | 5 ++++-
arch/x86/kvm/x86.c | 9 +++++++--
arch/x86/kvm/x86.h | 1 +
5 files changed, 50 insertions(+), 12 deletions(-)
--
2.34.1