Message-ID: <5b786dde-1fc4-9abc-ae95-8360e033fb97@amazon.de>
Date: Wed, 8 May 2019 21:14:43 +0200
From: Jan H. Schönherr <jschoenh@...zon.de>
To: "Suthikulpanit, Suravee" <Suravee.Suthikulpanit@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>
Cc: "joro@...tes.org" <joro@...tes.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"rkrcmar@...hat.com" <rkrcmar@...hat.com>
Subject: Re: [PATCH 3/6] svm: Add support for APIC_ACCESS_PAGE_PRIVATE_MEMSLOT setup/destroy

On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> Activating/deactivating AVIC requires setting/unsetting the memory region used
> for APIC_ACCESS_PAGE_PRIVATE_MEMSLOT. So, re-factor avic_init_access_page()
> to avic_setup_access_page() and add srcu_read_lock/unlock, which are needed
> to allow this function to be called at run time.
>
> Also, introduce avic_destroy_access_page() to unset the page when
> deactivating AVIC.
>
> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
> ---
> arch/x86/kvm/svm.c | 28 ++++++++++++++++++++++++++--
> 1 file changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 4cf93a729ad8..f41f34f70dde 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1666,7 +1666,7 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
> * field of the VMCB. Therefore, we set up the
> * APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (4KB) here.
> */
> -static int avic_init_access_page(struct kvm_vcpu *vcpu)
> +static int avic_setup_access_page(struct kvm_vcpu *vcpu, bool init)
> {
> struct kvm *kvm = vcpu->kvm;
> int ret = 0;
> @@ -1675,10 +1675,14 @@ static int avic_init_access_page(struct kvm_vcpu *vcpu)
> if (kvm->arch.apic_access_page_done)
> goto out;
>
> + if (!init)
> + srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
> ret = __x86_set_memory_region(kvm,
> APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
> APIC_DEFAULT_PHYS_BASE,
> PAGE_SIZE);
> + if (!init)
> + vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
> if (ret)
> goto out;
>
> @@ -1688,6 +1692,26 @@ static int avic_init_access_page(struct kvm_vcpu *vcpu)
> return ret;
> }
>
> +static void avic_destroy_access_page(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = vcpu->kvm;
> +
> + mutex_lock(&kvm->slots_lock);
> +
> + if (!kvm->arch.apic_access_page_done)
> + goto out;
> +
> + srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
> + __x86_set_memory_region(kvm,
> + APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
> + APIC_DEFAULT_PHYS_BASE,
> + 0);
> + vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
This pattern of "unlock, do something, re-lock" strikes me as odd --
here and in the setup function.

There seem to be a few assumptions for this to work:

a) SRCU read-side critical sections must not be nested.

b) We must not keep any pointer to an SRCU-protected structure
   across a call to this function.
Can we guarantee these assumptions, now and in the future (given that
this is all a bit hidden in the call stack)?

(And if we can guarantee them, why are we holding the SRCU lock in the
first place?)

Or is there maybe a nicer way to do this?

Regards
Jan
> + kvm->arch.apic_access_page_done = false;
> +out:
> + mutex_unlock(&kvm->slots_lock);
> +}
> +
> static int avic_init_backing_page(struct kvm_vcpu *vcpu)
> {
> int ret;
> @@ -1695,7 +1719,7 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
> int id = vcpu->vcpu_id;
> struct vcpu_svm *svm = to_svm(vcpu);
>
> - ret = avic_init_access_page(vcpu);
> + ret = avic_setup_access_page(vcpu, true);
> if (ret)
> return ret;
>
>