Message-ID: <ZNuQ0grC44Dbh5hS@google.com>
Date:   Tue, 15 Aug 2023 07:50:58 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Yan Zhao <yan.y.zhao@...el.com>
Cc:     bibo mao <maobibo@...ngson.cn>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        pbonzini@...hat.com, mike.kravetz@...cle.com, apopple@...dia.com,
        jgg@...dia.com, rppt@...nel.org, akpm@...ux-foundation.org,
        kevin.tian@...el.com, david@...hat.com
Subject: Re: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed
 protected for NUMA migration

On Tue, Aug 15, 2023, Yan Zhao wrote:
> On Mon, Aug 14, 2023 at 09:40:44AM -0700, Sean Christopherson wrote:
> > > > Note, I'm assuming secondary MMUs aren't allowed to map swap entries...
> > > > 
> > > > Compile tested only.
> > > 
> > > I can't find a matching end for each
> > > mmu_notifier_invalidate_range_start_nonblock().
> > 
> > It pairs with the existing call to mmu_notifier_invalidate_range_end() in change_pmd_range():
> > 
> > 	if (range.start)
> > 		mmu_notifier_invalidate_range_end(&range);
> No, it doesn't work for the mmu_notifier_invalidate_range_start() sent in change_pte_range()
> if we only want the range to include pages successfully set to PROT_NONE.

Precise invalidation was a non-goal for my hack-a-patch.  The intent was purely
to defer invalidation until it was actually needed, but still perform only a
single notification so as to batch the TLB flushes, e.g. the start() call still
used the original @end.

The idea was to play nice with the scenario where nothing in a VMA could be migrated.
It was completely untested though, so it may not have actually done anything to reduce
the number of pointless invalidations.
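
For illustration only, a rough sketch of that deferral shape (not the actual hack-a-patch;
the helper and its name are made up): issue the start() lazily, the first time a PTE is
actually about to be made PROT_NONE, but still cover the caller's original @end so the
whole batch gets a single notification, with the existing "if (range.start)
mmu_notifier_invalidate_range_end(&range);" in change_pmd_range() as the one matching end.

#include <linux/mmu_notifier.h>

/* Sketch only: send the start() once, the first time a PTE is really changed. */
static int numa_invalidate_start_once(struct mmu_notifier_range *range,
				      struct mm_struct *mm,
				      unsigned long addr, unsigned long end)
{
	int ret;

	if (range->start)	/* start() already sent for this batch */
		return 0;

	mmu_notifier_range_init(range, MMU_NOTIFY_PROTECTION_VMA, 0,
				mm, addr, end);

	/*
	 * Non-blocking variant, as the caller holds the PTE lock.  On
	 * failure, leave range->start clear so the caller neither touches
	 * the PTE nor issues a spurious end().
	 */
	ret = mmu_notifier_invalidate_range_start_nonblock(range);
	if (ret)
		range->start = 0;
	return ret;
}

change_pte_range() would call the helper right before writing the first PROT_NONE PTE;
change_pmd_range() keeps its existing single, conditional range_end() as quoted above.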

> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 9e4cd8b4a202..f29718a16211 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4345,6 +4345,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> >  	if (unlikely(!fault->slot))
> >  		return kvm_handle_noslot_fault(vcpu, fault, access);
> >  
> > +	if (mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva))
> > +		return RET_PF_RETRY;
> > +
> This can effectively reduce the remote flush IPIs a lot!
> One nit is that maybe an rmb() or READ_ONCE() is required for kvm->mmu_invalidate_range_start
> and kvm->mmu_invalidate_range_end.
> Otherwise, I'm somewhat worried about constant false positives and retries.

If anything, this needs a READ_ONCE() on mmu_invalidate_in_progress.  The ranges
aren't touched when mmu_invalidate_in_progress goes to zero, so ensuring they
are reloaded wouldn't do anything.  The key to making forward progress is seeing
that there is no in-progress invalidation.

I did consider adding said READ_ONCE(), but practically speaking, constant false
positives are impossible.  KVM will re-enter the guest when retrying, and there
is zero chance of the compiler avoiding reloads across VM-Enter+VM-Exit.

I suppose in theory we might someday differentiate between "retry because a different
vCPU may have fixed the fault" and "retry because there's an in-progress invalidation",
and not bother re-entering the guest for the latter, e.g. have it try to yield
instead.  

All that said, READ_ONCE() on mmu_invalidate_in_progress should effectively be a
nop, so it wouldn't hurt to be paranoid in this case.

Hmm, at that point, it probably makes sense to add a READ_ONCE() for mmu_invalidate_seq
too, e.g. so that a sufficiently clever compiler doesn't completely optimize away
the check.  Losing the check wouldn't be problematic (false negatives are fine,
especially on that particular check), but the generated code would *look* buggy.
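
Concretely, assuming the lockless check keeps the shape of mmu_invalidate_retry_hva(),
the paranoid version would look roughly like the sketch below (the helper name is
hypothetical); only the loads called out above get READ_ONCE() treatment:

#include <linux/kvm_host.h>

/* Sketch only: a hypothetical lockless variant of mmu_invalidate_retry_hva(). */
static bool hva_invalidation_retry(struct kvm *kvm, unsigned long mmu_seq,
				   unsigned long hva)
{
	/*
	 * Forward progress hinges on observing mmu_invalidate_in_progress
	 * going to zero, so force a reload.  The range fields aren't
	 * touched when the count hits zero, so plain loads suffice there.
	 */
	if (unlikely(READ_ONCE(kvm->mmu_invalidate_in_progress)) &&
	    hva >= kvm->mmu_invalidate_range_start &&
	    hva < kvm->mmu_invalidate_range_end)
		return true;

	/*
	 * Effectively a nop for correctness (a false negative here is fine),
	 * but it keeps the seq check from looking like it could be elided.
	 */
	return READ_ONCE(kvm->mmu_invalidate_seq) != mmu_seq;
}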
