Message-ID: <CANRm+CxiN0DPJMCoYzeQ5FMCfw8Cyp0CvGftFs68dz+-rrTCiw@mail.gmail.com>
Date:   Wed, 31 Mar 2021 09:41:08 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        David Woodhouse <dwmw@...zon.co.uk>
Subject: Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections

On Wed, 31 Mar 2021 at 01:02, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> There is no need to include changes to vcpu->requests into
> the pvclock_gtod_sync_lock critical section.  The changes to
> the shared data structures (in pvclock_update_vm_gtod_copy)
> already occur under the lock.
>
> Cc: David Woodhouse <dwmw@...zon.co.uk>
> Cc: Marcelo Tosatti <mtosatti@...hat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>

Reviewed-by: Wanpeng Li <wanpengli@...cent.com>

> ---
>  arch/x86/kvm/x86.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fe806e894212..0a83eff40b43 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2562,10 +2562,12 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
>
>         kvm_hv_invalidate_tsc_page(kvm);
>
> -       spin_lock(&ka->pvclock_gtod_sync_lock);
>         kvm_make_mclock_inprogress_request(kvm);
> +
>         /* no guest entries from this point */
> +       spin_lock(&ka->pvclock_gtod_sync_lock);
>         pvclock_update_vm_gtod_copy(kvm);
> +       spin_unlock(&ka->pvclock_gtod_sync_lock);
>
>         kvm_for_each_vcpu(i, vcpu, kvm)
>                 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
> @@ -2573,8 +2575,6 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
>         /* guest entries allowed */
>         kvm_for_each_vcpu(i, vcpu, kvm)
>                 kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu);
> -
> -       spin_unlock(&ka->pvclock_gtod_sync_lock);
>  #endif
>  }
>
> @@ -7740,16 +7740,14 @@ static void kvm_hyperv_tsc_notifier(void)
>                 struct kvm_arch *ka = &kvm->arch;
>
>                 spin_lock(&ka->pvclock_gtod_sync_lock);
> -
>                 pvclock_update_vm_gtod_copy(kvm);
> +               spin_unlock(&ka->pvclock_gtod_sync_lock);
>
>                 kvm_for_each_vcpu(cpu, vcpu, kvm)
>                         kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
>
>                 kvm_for_each_vcpu(cpu, vcpu, kvm)
>                         kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu);
> -
> -               spin_unlock(&ka->pvclock_gtod_sync_lock);
>         }
>         mutex_unlock(&kvm_lock);
>  }
> --
> 2.26.2
>
>
