Message-ID: <CAL715WJowYL=W40SWmtPoz1F9WVBFDG7TQwbsV2Bwf9-cS77=Q@mail.gmail.com>
Date:   Mon, 5 Jun 2023 10:42:18 -0700
From:   Mingwei Zhang <mizhang@...gle.com>
To:     Jim Mattson <jmattson@...gle.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH] KVM: x86/mmu: Remove KVM MMU write lock when accessing indirect_shadow_pages

On Mon, Jun 5, 2023 at 9:55 AM Jim Mattson <jmattson@...gle.com> wrote:
>
> On Sun, Jun 4, 2023 at 5:43 PM Mingwei Zhang <mizhang@...gle.com> wrote:
> >
> > Remove the KVM MMU write lock when accessing the indirect_shadow_pages
> > counter when the page role is direct, because this counter value is used
> > only as a coarse-grained heuristic to check whether a nested guest is
> > active. Racing with this heuristic without the mmu lock is harmless
> > because the corresponding indirect shadow SPTEs for the GPA will either
> > be zapped by this thread or by some other thread that has previously
> > zapped all indirect shadow pages and brought the value to 0.
> >
> > Because of that, remove the KVM MMU write lock pair to potentially reduce
> > lock contention and improve the performance of nested VMs. In addition,
> > opportunistically change the 'direct mmu' comment to make the description
> > consistent with other places.
> >
> > Reported-by: Jim Mattson <jmattson@...gle.com>
> > Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
> > ---
> >  arch/x86/kvm/x86.c | 10 ++--------
> >  1 file changed, 2 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 5ad55ef71433..97cfa5a00ff2 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -8585,15 +8585,9 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> >
> >         kvm_release_pfn_clean(pfn);
> >
> > -       /* The instructions are well-emulated on direct mmu. */
> > +       /* The instructions are well-emulated on Direct MMUs. */
> >         if (vcpu->arch.mmu->root_role.direct) {
> > -               unsigned int indirect_shadow_pages;
> > -
> > -               write_lock(&vcpu->kvm->mmu_lock);
> > -               indirect_shadow_pages = vcpu->kvm->arch.indirect_shadow_pages;
> > -               write_unlock(&vcpu->kvm->mmu_lock);
> > -
> > -               if (indirect_shadow_pages)
> > +               if (READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
>
> I don't understand the need for READ_ONCE() here. That implies that
> there is something tricky going on, and I don't think that's the case.

READ_ONCE() just tells the compiler not to elide the read. Since this
is reading a global variable, the compiler might otherwise reuse a
previously loaded copy if the value has already been read into a local
variable. But that is not the case here...
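
For illustration only, here is a minimal user-space sketch of the idea
(MY_READ_ONCE is a hypothetical stand-in; the kernel's real macro adds
type checking and handles larger-than-word sizes):

#define MY_READ_ONCE(x) (*(const volatile typeof(x) *)&(x))

static unsigned int counter;

unsigned int read_counter(void)
{
	/*
	 * The volatile cast forces a fresh load from memory; a plain
	 * read of 'counter' could legally be satisfied from a copy
	 * the compiler already holds in a register.
	 */
	return MY_READ_ONCE(counter);
}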

Note that I see there is already another READ_ONCE() of
kvm->arch.indirect_shadow_pages, so I am reusing the same pattern.

I did check for reordering issues, and it should be fine: when 'we'
see indirect_shadow_pages as 0, the shadow pages must already have
been zapped. This follows not only from the locking, but also from the
program order in __kvm_mmu_prepare_zap_page(), which zaps the shadow
pages first and only then updates the stats.
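
To spell out that ordering argument, a simplified, hypothetical sketch
(the real logic lives in __kvm_mmu_prepare_zap_page() and
reexecute_instruction() and is more involved):

static unsigned int indirect_shadow_pages;

/* Zapper side, running under mmu_lock: */
void zap_side(void)
{
	/* step 1: zap the indirect shadow SPTEs ... */
	/* step 2: only afterwards drop the counter */
	indirect_shadow_pages--;
}

/* Lockless reader, as in the patched reexecute_instruction(): */
int no_indirect_pages(void)
{
	/*
	 * Seeing 0 here means step 2 already happened on the zapper
	 * side, and program order there means step 1 happened before
	 * it, so the stale SPTEs for the GPA are already gone. The
	 * volatile cast plays the role of READ_ONCE() in the patch.
	 */
	return *(const volatile unsigned int *)&indirect_shadow_pages == 0;
}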
