Message-ID: <a9d3301c-4c2f-9624-dc52-1033b940ef06@redhat.com>
Date: Wed, 4 Dec 2019 10:42:27 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 01/15] KVM: Move running VCPU from ARM to common code
On 03/12/19 20:01, Sean Christopherson wrote:
> In case it wasn't clear, I strongly dislike adding kvm_get_running_vcpu().
> IMO, it's an unnecessary hack.  The proper change to ensure a valid vCPU is
> seen by mark_page_dirty_in_ring() when there is a current vCPU is to
> plumb the vCPU down through the various call stacks. Looking up the call
> stacks for mark_page_dirty() and mark_page_dirty_in_slot(), they all
> originate with a vcpu->kvm within a few functions, except for the rare
> case where the write is coming from a non-vcpu ioctl(), in which case
> there is no current vCPU.
>
> The proper change is obviously much bigger in scope and would require
> touching gobs of arch specific code, but IMO the end result would be worth
> the effort. E.g. there's a decent chance it would reduce the API between
> common KVM and arch specific code by eliminating the exports of variants
> that take "struct kvm *" instead of "struct kvm_vcpu *".
It's not that simple. In some cases, the "struct kvm *" cannot be
easily replaced with a "struct kvm_vcpu *" without making the API less
intuitive; for example, think of a function that takes a kvm_vcpu pointer
but then calls gfn_to_hva(vcpu->kvm) instead of the expected
kvm_vcpu_gfn_to_hva(vcpu).
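
Purely as an illustration (the helper below is made up, not real code):

	/*
	 * Made-up example: takes a vCPU only because the caller happens to
	 * have one, but the lookup is deliberately against address space 0.
	 * kvm_vcpu_gfn_to_hva(vcpu, gfn) would be the "expected" call here,
	 * yet it would be wrong, because it uses the vCPU's address space.
	 */
	static unsigned long frob_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn)
	{
		return gfn_to_hva(vcpu->kvm, gfn);
	}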
That said, looking at the code again after a couple of years, I agree that
the usage of kvm_get_running_vcpu() is ugly.  But I don't think it's
kvm_get_running_vcpu()'s fault; rather, it's the vCPU argument in
mark_page_dirty_in_slot and mark_page_dirty_in_ring that is confusing
and that we should not be adding.
kvm_get_running_vcpu() basically means "you can use the per-vCPU ring
and avoid locking", nothing more. Right now we need the vCPU argument
in mark_page_dirty_in_ring for kvm_arch_vcpu_memslots_id(vcpu), but that
is unnecessary and is the real source of confusion (possibly bugs too)
if it gets out of sync.
Instead, let's add an as_id field to struct kvm_memory_slot (which is
trivial to initialize in __kvm_set_memory_region), and just do
	as_id = slot->as_id;
	vcpu = kvm_get_running_vcpu();
in mark_page_dirty_in_ring.
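
Roughly, the end result I have in mind would look like this (untested
sketch; the field placement and the mark_page_dirty_in_ring signature
are only guesses based on the RFC, not what the final code has to be):

	/* include/linux/kvm_host.h */
	struct kvm_memory_slot {
		...
		short as_id;	/* address space this slot belongs to */
	};

	/* __kvm_set_memory_region() already computes as_id from mem->slot */
	new.as_id = as_id;

	/* virt/kvm/kvm_main.c */
	static void mark_page_dirty_in_ring(struct kvm *kvm,
					    struct kvm_memory_slot *slot,
					    gfn_t gfn)
	{
		struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
		int as_id = slot->as_id;  /* no kvm_arch_vcpu_memslots_id() */

		/*
		 * vcpu != NULL: push to the per-vCPU ring, no locking needed.
		 * vcpu == NULL (non-vCPU ioctl): fall back to the per-VM ring
		 * under its lock.
		 */
		...
	}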
Paolo