Message-ID: <YhkRcK64Jya6YpA9@google.com>
Date: Fri, 25 Feb 2022 17:27:12 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: David Woodhouse <dwmw2@...radead.org>
Cc: "borntraeger@...ux.ibm.com" <borntraeger@...ux.ibm.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"frankja@...ux.ibm.com" <frankja@...ux.ibm.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"imbrenda@...ux.ibm.com" <imbrenda@...ux.ibm.com>,
"david@...hat.com" <david@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [EXTERNAL] [PATCH v2] KVM: Don't actually set a request when
evicting vCPUs for GFN cache invd
On Fri, Feb 25, 2022, David Woodhouse wrote:
> On Fri, 2022-02-25 at 16:13 +0000, Sean Christopherson wrote:
> > On Fri, Feb 25, 2022, Woodhouse, David wrote:
> > > Since we need an active vCPU context to do dirty logging (thanks, dirty
> > > ring)... and since any time vcpu_run exits to userspace for any reason
> > > might be the last time we ever get an active vCPU context... I think
> > > that kind of fundamentally means that we must flush dirty state to the
> > > log on *every* return to userspace, doesn't it?
> >
> > I would rather add a variant of mark_page_dirty_in_slot() that takes a vCPU, which
> > we would have in all cases. I see no reason to require use of kvm_get_running_vcpu().
>
> We already have kvm_vcpu_mark_page_dirty(), but it can't use just 'some
> vcpu' because the dirty ring is lockless. So if you're ever going to
> use anything other than kvm_get_running_vcpu() we need to add locks.
Heh, actually, scratch my previous comment. I was going to respond that
kvm_get_running_vcpu() is mutually exclusive with all other ioctls() on the same
vCPU by virtue of vcpu->mutex, but I had forgotten that kvm_get_running_vcpu()
really should be "kvm_get_loaded_vcpu()". I.e. as long as KVM is in a vCPU-ioctl
path, kvm_get_running_vcpu() will be non-null.
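To put some code behind what I was suggesting, the variant I had in mind is
roughly the below.  Completely untested sketch, the "__" name is just a
placeholder, and it assumes the existing dirty ring/bitmap helpers keep their
current semantics and that the caller always passes the vCPU that is actually
loaded:

	static void __mark_page_dirty_in_slot(struct kvm_vcpu *vcpu,
					      const struct kvm_memory_slot *memslot,
					      gfn_t gfn)
	{
		if (!memslot || !kvm_slot_dirty_track_enabled(memslot))
			return;

		if (vcpu->kvm->dirty_ring_size) {
			/*
			 * The dirty ring is lockless and per-vCPU, so pushing
			 * is safe only from the vCPU's own (loaded) context.
			 */
			u32 slot = (memslot->as_id << 16) | memslot->id;

			kvm_dirty_ring_push(&vcpu->dirty_ring, slot,
					    gfn - memslot->base_gfn);
		} else {
			set_bit_le(gfn - memslot->base_gfn,
				   memslot->dirty_bitmap);
		}
	}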
> And while we *could* do that, I don't think it would negate the
> fundamental observation that *any* time we return from vcpu_run to
> userspace, that could be the last time. Userspace might read the dirty
> log for the *last* time, and any internally-cached "oh, at some point
> we need to mark <this> page dirty" is lost because by the time the vCPU
> is finally destroyed, it's too late.
Hmm, isn't that an existing bug? I think the correct fix would be to flush all
dirty vmcs12 pages to the memslot in vmx_get_nested_state(). Userspace _must_
invoke that if it wants to migrate a nested vCPU.
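E.g. something like the below in vmx_get_nested_state(), before the state is
copied out to userspace.  Untested sketch, assumes "vmx = to_vmx(vcpu)" is in
scope, and nested_flush_cached_vmcs12() is a made-up name standing in for
whatever actually writes the cached vmcs12 back to guest memory:

	if (vmx->nested.current_vmptr != -1ull) {
		/*
		 * Write the cached vmcs12 back to guest memory and mark the
		 * backing page dirty so that a final dirty log/ring harvest
		 * after this ioctl still sees it.
		 */
		nested_flush_cached_vmcs12(vcpu);
		kvm_vcpu_mark_page_dirty(vcpu,
					 gpa_to_gfn(vmx->nested.current_vmptr));
	}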
> I think I'm going to rip out the 'dirty' flag from the gfn_to_pfn_cache
> completely and add a function (to be called with an active vCPU
> context) which marks the page dirty *now*.
Hrm, something like?
1. Drop @dirty from kvm_gfn_to_pfn_cache_init()
2. Rename @dirty => @old_dirty in kvm_gfn_to_pfn_cache_refresh()
3. Add an API to mark the associated slot dirty without unmapping
I think that makes sense.
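Roughly, the prototypes would end up something like this (parameter lists
reconstructed from memory, and the mark-dirty name is invented, so don't read
too much into the details):

	/* 1. No more @dirty at init time. */
	int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
				      struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
				      gpa_t gpa, unsigned long len);

	/* 2. @dirty describes only the _old_ mapping being replaced. */
	int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
					 gpa_t gpa, unsigned long len, bool old_dirty);

	/* 3. Mark the currently cached gfn dirty, without unmapping it. */
	void kvm_gfn_to_pfn_cache_mark_dirty(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);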
> KVM_GUEST_USES_PFN users like nested VMX will be expected to do this
> before returning from vcpu_run anytime it's in L2 guest mode.
As above, I think the correct thing to do is to enlighten the flows that retrieve
the state being cached.