Date:   Thu, 26 Aug 2021 06:58:52 +0800
From:   Lai Jiangshan <jiangshanlai@...il.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Lai Jiangshan <laijs@...ux.alibaba.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        X86 ML <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
        kvm@...r.kernel.org
Subject: Re: [PATCH 7/7] KVM: X86: Also prefetch the last range in __direct_pte_prefetch().

On Wed, Aug 25, 2021 at 11:18 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Tue, Aug 24, 2021, Lai Jiangshan wrote:
> > From: Lai Jiangshan <laijs@...ux.alibaba.com>
> >
> > __direct_pte_prefetch() skips prefetching the last range.
> >
> > The last range is often the whole range after the faulted spte when the
> > guest is touching huge-page-mapped (in the guest's view) memory in a
> > forward direction, which means prefetching it can reduce page faults.
> >
> > Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index e5932af6f11c..ac260e01e9d8 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2847,8 +2847,9 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
> >       i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
> >       spte = sp->spt + i;
> >
> > -     for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
> > -             if (is_shadow_present_pte(*spte) || spte == sptep) {
> > +     for (i = 0; i <= PTE_PREFETCH_NUM; i++, spte++) {
> > +             if (i == PTE_PREFETCH_NUM ||
> > +                 is_shadow_present_pte(*spte) || spte == sptep) {
>
> Heh, I posted a fix just a few days ago.  I prefer having a separate call after
> the loop.  The "<= PTE_PREFETCH_NUM" is subtle, and a check at the end avoids
> a CMP+Jcc in the loop, though I highly doubt that actually affects performance.
>
> https://lkml.kernel.org/r/20210818235615.2047588-1-seanjc@google.com

Thanks!
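
For reference, a minimal sketch of that shape as I read it (the trailing
range handled by one direct_pte_prefetch_many() call after the loop; based
on the quoted diff, not copied from your linked patch):

	static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
					  struct kvm_mmu_page *sp, u64 *sptep)
	{
		u64 *spte, *start = NULL;
		int i;

		WARN_ON(!sp->role.direct);

		i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
		spte = sp->spt + i;

		for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
			if (is_shadow_present_pte(*spte) || spte == sptep) {
				if (!start)
					continue;
				/* Bail on failure; skip the trailing call too. */
				if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
					return;
				start = NULL;
			} else if (!start)
				start = spte;
		}

		/*
		 * The loop only flushes a pending range when it hits a present
		 * spte (or sptep), so a range that runs to the end of the page
		 * table was never prefetched inside the loop; do it here.
		 */
		if (start)
			direct_pte_prefetch_many(vcpu, sp, start, spte);
	}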

>
> >                       if (!start)
> >                               continue;
> >                       if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
> > --
> > 2.19.1.6.gb485710b
> >
