Date:   Fri, 22 Oct 2021 17:56:54 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>
Cc:     Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/4] KVM: x86: APICv cleanups

On Fri, 2021-10-22 at 12:12 +0200, Paolo Bonzini wrote:
> On 22/10/21 02:49, Sean Christopherson wrote:
> > APICv cleanups and a dissertation on handling concurrent APIC access page
> > faults and APICv inhibit updates.
> > 
> > I've tested this but haven't hammered the AVIC stuff, I'd appreciate it if
> > someone with the Hyper-V setup can beat on the AVIC toggling.
> > 
> > Sean Christopherson (4):
> >    KVM: x86/mmu: Use vCPU's APICv status when handling APIC_ACCESS
> >      memslot
> >    KVM: x86: Move SVM's APICv sanity check to common x86
> >    KVM: x86: Move apicv_active flag from vCPU to in-kernel local APIC
> >    KVM: x86: Use rw_semaphore for APICv lock to allow vCPU parallelism
> > 
> >   arch/x86/include/asm/kvm_host.h |  3 +-
> >   arch/x86/kvm/hyperv.c           |  4 +--
> >   arch/x86/kvm/lapic.c            | 46 ++++++++++---------------
> >   arch/x86/kvm/lapic.h            |  5 +--
> >   arch/x86/kvm/mmu/mmu.c          | 29 ++++++++++++++--
> >   arch/x86/kvm/svm/avic.c         |  2 +-
> >   arch/x86/kvm/svm/svm.c          |  2 --
> >   arch/x86/kvm/vmx/vmx.c          |  4 +--
> >   arch/x86/kvm/x86.c              | 59 ++++++++++++++++++++++-----------
> >   9 files changed, 93 insertions(+), 61 deletions(-)
> > 
> 
> Queued, thanks.  I only made small edits to the comment in patch
> 1, to make it very slightly shorter.
> 
> 	 * 2a. APICv is globally disabled but locally enabled, and this
> 	 *     vCPU acquires mmu_lock before __kvm_request_apicv_update
> 	 *     calls kvm_zap_gfn_range().  This vCPU will install a stale
> 	 *     SPTE, but no one will consume it as (a) no vCPUs can be
> 	 *     running due to the kick from KVM_REQ_APICV_UPDATE, and
> 	 *     (b) because KVM_REQ_APICV_UPDATE is raised before the VM
> 	 *     state is updated, vCPUs attempting to service the request
> 	 *     will block on apicv_update_lock.  The update flow will
> 	 *     then zap the SPTE and release the lock.
> 
> Paolo
> 

Hi Paolo and Sean!

Could you explain to me why the scenario I described in my reply to the previous
version of patch 1 is not correct?
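
For reference, this is roughly how I read the ordering in the update path after
patch 4 (a simplified sketch from memory, not the exact code; the helper names
should be the real ones, but most of the details are elided):

	/* update side, called with apicv_update_lock held for write */
	kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);	/* kick every vCPU out   */
	kvm->arch.apicv_inhibit_reasons = new;			/* flip the global state  */
	kvm_zap_gfn_range(kvm, gpa_to_gfn(APIC_DEFAULT_PHYS_BASE),
			  gpa_to_gfn(APIC_DEFAULT_PHYS_BASE) + 1);
	up_write(&kvm->arch.apicv_update_lock);

	/* vCPU side, kvm_vcpu_update_apicv(), run when the request is serviced */
	down_read(&vcpu->kvm->arch.apicv_update_lock);
	/* ... recompute and store the vCPU-local apicv_active flag ... */
	up_read(&vcpu->kvm->arch.apicv_update_lock);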

This is the scenario I was worried about:



    vCPU0                                   vCPU1
    =====                                   =====

- disable AVIC
- VMRUN
                                        - #NPT on AVIC MMIO access
                                        - *stuck on something prior to the page fault code*
- enable AVIC
- VMRUN
                                        - *still stuck on something prior to the page fault code*

- disable AVIC:

  - raise KVM_REQ_APICV_UPDATE request
                                        
  - set global AVIC state to disabled

  - zap the SPTE (does nothing, doesn't race
        with anything either)

  - handle KVM_REQ_APICV_UPDATE:
    - disable vCPU0 AVIC

- VMRUN
                                        - *still stuck on something prior to the page fault code*

                                                            ...
                                                            ...
                                                            ...

                                        - now vCPU1 finally starts running the page fault code.

                                        - vCPU1 AVIC is still enabled 
                                          (because vCPU1 never handled KVM_REQ_APICV_UPDATE),
                                          so the page fault code will populate the SPTE.
                                          

                                        - handle KVM_REQ_APICV_UPDATE
                                           - finally disable vCPU1 AVIC

                                        - VMRUN (vCPU1 AVIC disabled, SPTE populated)

                                                         ***boom***
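
Put differently, the check I am worried about is the APIC_ACCESS memslot
handling in the fault path.  A rough sketch of how I read it after patch 1
(again simplified, not the exact code):

	if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT) {
		/*
		 * vCPU1 reads only its *local* APICv state here, and it has
		 * not serviced KVM_REQ_APICV_UPDATE yet, so this still says
		 * "active" even though the global state is already inhibited
		 * and kvm_zap_gfn_range() has already run.
		 */
		if (!kvm_vcpu_apicv_active(vcpu)) {
			*r = RET_PF_EMULATE;	/* AVIC disabled: emulate instead of mapping */
			return true;
		}
		/* AVIC still "active": fall through and install the SPTE */
	}

As far as I can see this check takes neither apicv_update_lock nor anything
else that is ordered against the zap, so I don't see what prevents vCPU1 from
installing that SPTE after the zap has already happened.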



Best regards,
	Maxim Levitsky
