Message-ID: <YQlz4YDu/W8+YsZl@google.com>
Date: Tue, 3 Aug 2021 16:50:41 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Mingwei Zhang <mizhang@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Tom Lendacky <thomas.lendacky@....com>,
Marc Orr <marcorr@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Alper Gun <alpergun@...gle.com>,
Dionna Glaze <dionnaglaze@...gle.com>,
Vipin Sharma <vipinsh@...gle.com>,
Peter Gonda <pgonda@...gle.com>,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH v2] KVM: SVM: improve the code readability for ASID
management
+Tom and Brijesh
On Mon, Aug 02, 2021, Mingwei Zhang wrote:
> KVM SEV code uses bitmaps to manage ASID states. ASID 0 is always skipped
> because it is never used by a VM. Thus, in the existing code, an ASID value
> and its bitmap position always have an 'offset-by-1' relationship.
>
> Both SEV and SEV-ES share the ASID space, so KVM uses a dynamic range
> [min_asid, max_asid] to handle SEV and SEV-ES ASIDs separately.
>
> The existing code mixes the ASID value and its bitmap position by using the
> same variable, 'min_asid', for both.
>
> Fix the min_asid usage: ensure that its usage is consistent with its name;
> allocate an extra slot for ASID 0 so that each ASID has the same value as its
> bitmap position. Add comments on the ASID bitmap allocation to clarify the
> size change.
>
> v1 -> v2:
> - change ASID bitmap size to incorporate ASID 0 [sean]
> - remove the 'fixes' line in commit message. [sean/joerg]
>
> Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
> Cc: Tom Lendacky <thomas.lendacky@....com>
> Cc: Marc Orr <marcorr@...gle.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Alper Gun <alpergun@...gle.com>
> Cc: Dionna Glaze <dionnaglaze@...gle.com>
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: Vipin Sharma <vipinsh@...gle.com>
> Cc: Peter Gonda <pgonda@...gle.com>
> Cc: Joerg Roedel <joro@...tes.org>
> ---
...
> @@ -156,11 +157,11 @@ static int sev_asid_new(struct kvm_sev_info *sev)
> 		goto e_uncharge;
> 	}
> 
> -	__set_bit(pos, sev_asid_bitmap);
> +	__set_bit(asid, sev_asid_bitmap);
This patch missed sev_asid_free().
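In case a standalone illustration of the intended convention helps, here's a toy
sketch (illustrative names only, not kernel code; it assumes nr_asids ==
max_sev_asid + 1 so that ASID N maps directly to bit N and bit 0 simply goes
unused): the allocation and free paths both have to index the bitmap by the raw
ASID.

/* Toy model of the raw-ASID bitmap convention; names are illustrative only. */
#include <stdio.h>

#define MAX_SEV_ASID	8
#define NR_ASIDS	(MAX_SEV_ASID + 1)	/* extra bit so ASID 0 / bit 0 is never used */

static unsigned long asid_bitmap;	/* NR_ASIDS bits fit in one long for this toy */

static void toy_asid_new(int asid)
{
	asid_bitmap |= 1UL << asid;	/* raw ASID as the bit index */
}

static void toy_asid_free(int asid)
{
	/*
	 * Must also use the raw ASID.  The kernel sets a bit in a separate
	 * reclaim bitmap instead of clearing this one, but the indexing rule
	 * is the same.
	 */
	asid_bitmap &= ~(1UL << asid);
}

int main(void)
{
	int asid = 3;

	toy_asid_new(asid);
	toy_asid_free(asid);
	printf("bitmap after new+free: %#lx (expect 0)\n", asid_bitmap);
	return 0;
}

The fixup at the bottom of this mail makes sev_asid_free() follow that rule.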
And on a very related topic, I'm pretty sure the VMCB+ASID invalidation logic
indexes sev_vmcbs incorrectly. pre_sev_run() indexes sev_vmcbs by the ASID,
whereas sev_asid_free() indexes by ASID-1, i.e. on free KVM nullifies the wrong
sev_vmcb entry. sev_cpu_init() allocates for max_sev_asid+1, so indexing by
ASID appears to be the intended behavior. That code is also a good candidate for
conversion to nr_asids in this patch.
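If it helps to see that mismatch in isolation, here's a tiny standalone sketch
(toy names and sizes, not the actual kernel code) of how freeing at ASID-1
leaves a stale pointer in the exact slot the VMRUN path checks:

/* Toy model of the sev_vmcbs indexing mismatch; all names are illustrative. */
#include <stdio.h>

#define MAX_SEV_ASID	4

static void *sev_vmcbs[MAX_SEV_ASID + 1];	/* sized like sev_cpu_init(): max_sev_asid + 1 */

/* Reader, modeled on pre_sev_run(): indexes by the raw ASID. */
static int vmcb_was_last_used(int asid, void *vmcb)
{
	return sev_vmcbs[asid] == vmcb;
}

/* Buggy writer, modeled on the pre-fix sev_asid_free(): indexes by ASID - 1. */
static void buggy_free(int asid)
{
	sev_vmcbs[asid - 1] = NULL;	/* nullifies the wrong slot */
}

int main(void)
{
	char vmcb;			/* stand-in for a VMCB allocation */
	int asid = 2;

	sev_vmcbs[asid] = &vmcb;	/* the VMRUN path records the last used VMCB here */
	buggy_free(asid);		/* clears index 1, not index 2 */

	/*
	 * A new VM that happens to reuse the same ASID and VMCB address would
	 * see a stale "last used" match and could skip the ASID flush.
	 */
	printf("stale match after free: %s\n",
	       vmcb_was_last_used(asid, &vmcb) ? "yes (bug)" : "no");
	return 0;
}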
For the sev_vmcbs bug:
From 78c334121adaa51a2a84942ec65dceefd3752590 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@...gle.com>
Date: Tue, 3 Aug 2021 09:27:46 -0700
Subject: [PATCH] KVM: SVM: Fix off-by-one indexing when nullifying last used
SEV VMCB
Use the raw ASID, not ASID-1, when nullifying the last used VMCB when
freeing an SEV ASID. The consumer, pre_sev_run(), indexes the array by
the raw ASID, thus KVM could get a false negative when checking for a
different VMCB if KVM manages to reallocate the same ASID+VMCB combo for
a new VM.
Note, this cannot cause a functional issue _in the current code_, as
pre_sev_run() also checks which pCPU last did VMRUN for the vCPU, and
last_vmentry_cpu is initialized to -1 during vCPU creation, i.e. is
guaranteed to mismatch on the first VMRUN. However, prior to commit
8a14fe4f0c54 ("kvm: x86: Move last_cpu into kvm_vcpu_arch as
last_vmentry_cpu"), SVM tracked pCPU on its own and zero-initialized the
last_cpu variable. Thus it's theoretically possible that older versions
of KVM could miss a TLB flush if the first VMRUN is on pCPU0 and the ASID
and VMCB exactly match those of a prior VM.
Fixes: 70cd94e60c73 ("KVM: SVM: VMRUN should use associated ASID when SEV is enabled")
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Brijesh Singh <brijesh.singh@....com>
Cc: stable@...r.kernel.org
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/kvm/svm/sev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 9f1585f40c85..f4f5d554eaaa 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -189,7 +189,7 @@ static void sev_asid_free(struct kvm_sev_info *sev)
 
 	for_each_possible_cpu(cpu) {
 		sd = per_cpu(svm_data, cpu);
-		sd->sev_vmcbs[pos] = NULL;
+		sd->sev_vmcbs[sev->asid] = NULL;
 	}
 
 	mutex_unlock(&sev_bitmap_lock);
--
2.32.0.554.ge1b32706d8-goog
And fixup for this patch:
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f3df1eba0c72..416ae0b687fc 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -180,13 +180,12 @@ static int sev_get_asid(struct kvm *kvm)
 static void sev_asid_free(struct kvm_sev_info *sev)
 {
 	struct svm_cpu_data *sd;
-	int cpu, pos;
+	int cpu;
 	enum misc_res_type type;
 
 	mutex_lock(&sev_bitmap_lock);
 
-	pos = sev->asid - 1;
-	__set_bit(pos, sev_reclaim_asid_bitmap);
+	__set_bit(sev->asid, sev_reclaim_asid_bitmap);
 
 	for_each_possible_cpu(cpu) {
 		sd = per_cpu(svm_data, cpu);
@@ -1928,7 +1927,7 @@ int sev_cpu_init(struct svm_cpu_data *sd)
 	if (!sev_enabled)
 		return 0;
 
-	sd->sev_vmcbs = kcalloc(max_sev_asid + 1, sizeof(void *), GFP_KERNEL);
+	sd->sev_vmcbs = kcalloc(nr_asids, sizeof(void *), GFP_KERNEL);
 	if (!sd->sev_vmcbs)
 		return -ENOMEM;
 
--