Message-ID: <Y3LmloAFnNpHDumV@yzhao56-desk.sh.intel.com>
Date:   Tue, 15 Nov 2022 09:08:38 +0800
From:   Yan Zhao <yan.y.zhao@...el.com>
To:     Sean Christopherson <seanjc@...gle.com>
CC:     <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <pbonzini@...hat.com>, <intel-gfx@...ts.freedesktop.org>,
        <intel-gvt-dev@...ts.freedesktop.org>, <zhenyuw@...ux.intel.com>
Subject: Re: [PATCH v2 1/3] KVM: x86: add a new page track hook
 track_remove_slot

On Tue, Nov 15, 2022 at 12:55:42AM +0000, Sean Christopherson wrote:
> On Tue, Nov 15, 2022, Yan Zhao wrote:
> > On Mon, Nov 14, 2022 at 11:24:16PM +0000, Sean Christopherson wrote:
> > > On Tue, Nov 15, 2022, Yan Zhao wrote:
> > > > On Mon, Nov 14, 2022 at 04:32:34PM +0000, Sean Christopherson wrote:
> > > > > On Mon, Nov 14, 2022, Yan Zhao wrote:
> > > > > > On Sat, Nov 12, 2022 at 12:43:07AM +0000, Sean Christopherson wrote:
> > > > > > > On Sat, Nov 12, 2022, Yan Zhao wrote:
> > > > > > > > And I'm also not sure if a slots_arch_lock is required for
> > > > > > > > kvm_slot_page_track_add_page() and kvm_slot_page_track_remove_page().
> > > > > > > 
> > > > > > > It's not required.  slots_arch_lock protects interaction between memslot updates
> > > > > > In kvm_slot_page_track_add_page() and kvm_slot_page_track_remove_page(),
> > > > > > slot->arch.gfn_track[mode][index] is updated in update_gfn_track(),
> > > > > > do you know which lock is used to protect it?
> > > > > 
> > > > > mmu_lock protects the count, kvm->srcu protects the slot, and shadow_root_allocated
> > > > > protects the validity of gfn_track, i.e. shadow_root_allocated ensures that KVM
> > > > > allocates gfn_track for all memslots when shadow paging is activated.
> > > > Hmm, thanks for the reply.
> > > > But in direct_page_fault(),
> > > > if (page_fault_handle_page_track(vcpu, fault))
> > > > 	return RET_PF_EMULATE;
> > > > 
> > > > slot->arch.gfn_track is read without mmu_lock being held.
> > > 
> > > That's a fast path that deliberately reads outside of mmu_lock.  A false positive
> > > only results in unnecessary emulation, and any false positive is inherently prone
> > > to races anyway, e.g. a fault racing with a zap.
> > What about a false negative?
> > If the fast path reads a count of 0, no page-track write callback will be invoked
> > and write protection will be removed in the slow path.
> 
> No.  For a false negative to occur, a different task would have to create a SPTE
> and write-protect the GFN _while holding mmu_lock_.  And then after acquiring
> mmu_lock, the vCPU that got the false negative would call make_spte(), which would
> detect that making the SPTE writable is disallowed due to the GFN being write-protected.
> 
> 	if (pte_access & ACC_WRITE_MASK) {
> 		spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
> 
> 		/*
> 		 * Optimization: for pte sync, if spte was writable the hash
> 		 * lookup is unnecessary (and expensive). Write protection
> 		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
> 		 * Same reasoning can be applied to dirty page accounting.
> 		 */
> 		if (is_writable_pte(old_spte))
> 			goto out;
> 
> 		/*
> 		 * Unsync shadow pages that are reachable by the new, writable
> 		 * SPTE.  Write-protect the SPTE if the page can't be unsync'd,
> 		 * e.g. it's write-tracked (upper-level SPs) or has one or more
> 		 * shadow pages and unsync'ing pages is not allowed.
> 		 */
> 		if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
> 			pgprintk("%s: found shadow page for %llx, marking ro\n",
> 				 __func__, gfn);
> 			wrprot = true;
> 			pte_access &= ~ACC_WRITE_MASK;
> 			spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
> 		}
> 	}
> 
> int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
> 			    gfn_t gfn, bool can_unsync, bool prefetch)
> {
> 	struct kvm_mmu_page *sp;
> 	bool locked = false;
> 
> 	/*
> 	 * Force write-protection if the page is being tracked.  Note, the page
> 	 * track machinery is used to write-protect upper-level shadow pages,
> 	 * i.e. this guards the role.level == 4K assertion below!
> 	 */
> 	if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
> 		return -EPERM;
> 
> 	...
> }

Oh, you are right! I thought mmu_try_to_unsync_pages() was only for the
shadow MMU, and overlooked that the TDP MMU also goes through it.

Thanks for the detailed explanation.

Thanks
Yan
