Message-ID: <20191203190126.GC19877@linux.intel.com>
Date: Tue, 3 Dec 2019 11:01:26 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 01/15] KVM: Move running VCPU from ARM to common code

On Fri, Nov 29, 2019 at 04:34:51PM -0500, Peter Xu wrote:
> From: Paolo Bonzini <pbonzini@...hat.com>
>
> For ring-based dirty log tracking, it will be more efficient to account
> writes during schedule-out or schedule-in to the currently running VCPU.
> We would like to do it even if the write doesn't use the current VCPU's
> address space, as is the case for cached writes (see commit 4e335d9e7ddb,
> "Revert "KVM: Support vCPU-based gfn->hva cache"", 2017-05-02).
>
> Therefore, add a mechanism to track the currently-loaded kvm_vcpu struct.
> There is already something similar in KVM/ARM; one important difference
> is that kvm_arch_vcpu_{load,put} have two callers in virt/kvm/kvm_main.c:
> we have to update both the architecture-independent vcpu_{load,put} and
> the preempt notifiers.
>
> Another change made in the process is to allow using kvm_get_running_vcpu()
> in preemptible code. This is allowed because preempt notifiers ensure
> that the value does not change even after the VCPU thread is migrated.
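
For readers without the patch in front of them, the mechanism boils down
to roughly the following (a sketch of the idea, not the patch itself;
the real code also updates the pointer from vcpu_load()/vcpu_put(), the
other two callers mentioned above):

	static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_running_vcpu);

	/* Preempt notifier: record the vCPU scheduled in on this CPU. */
	static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
	{
		struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

		__this_cpu_write(kvm_running_vcpu, vcpu);
		kvm_arch_vcpu_load(vcpu, cpu);
	}

	static void kvm_sched_out(struct preempt_notifier *pn,
				  struct task_struct *next)
	{
		struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

		kvm_arch_vcpu_put(vcpu);
		__this_cpu_write(kvm_running_vcpu, NULL);
	}

	struct kvm_vcpu *kvm_get_running_vcpu(void)
	{
		struct kvm_vcpu *vcpu;

		/*
		 * Callable from preemptible code: if current is a vCPU
		 * thread, the notifiers rewrite the per-CPU pointer on
		 * every migration, so the value read here is stable.
		 */
		preempt_disable();
		vcpu = __this_cpu_read(kvm_running_vcpu);
		preempt_enable();

		return vcpu;
	}
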
In case it wasn't clear, I strongly dislike adding kvm_get_running_vcpu().
IMO, it's an unnecessary hack. The proper change to ensure a valid vCPU is
seen by mark_page_dirty_in_ring() when there is a current vCPU is to plumb
the vCPU down through the various call stacks. Looking up the call stacks
for mark_page_dirty() and mark_page_dirty_in_slot(), the "struct kvm" they
take originates as a vcpu->kvm within a few functions up the stack in all
but the rare case where the write comes from a non-vCPU ioctl(), in which
case there is no current vCPU.
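
Concretely, the plumbing I'm thinking of is along these lines (purely
hypothetical sketch, the signatures are illustrative only):

	/* Today: callers can only pass "struct kvm *". */
	void mark_page_dirty(struct kvm *kvm, gfn_t gfn);

	/*
	 * Proposed: callers that have a vCPU pass it explicitly; the
	 * rare non-vCPU ioctl() paths pass NULL and the ring-based
	 * accounting is simply skipped for them.
	 */
	void mark_page_dirty(struct kvm_vcpu *vcpu, struct kvm *kvm,
			     gfn_t gfn);
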
The proper change is obviously much bigger in scope and would require
touching gobs of arch specific code, but IMO the end result would be worth
the effort. E.g. there's a decent chance it would reduce the API between
common KVM and arch specific code by eliminating the exports of variants
that take "struct kvm *" instead of "struct kvm_vcpu *".
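
For example, there are currently parallel exports such as

	int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
			    unsigned long len);
	int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa,
				 const void *data, unsigned long len);

and with the vCPU plumbed down everywhere, the "struct kvm *" flavor
could plausibly go away.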