Message-ID: <28b3a004-b951-72fb-35fe-1f58673e6e93@arm.com>
Date: Mon, 21 Oct 2019 12:00:49 +0100
From: Steven Price <steven.price@....com>
To: Marc Zyngier <maz@...nel.org>
Cc: Mark Rutland <mark.rutland@....com>, kvm@...r.kernel.org,
Radim Krčmář <rkrcmar@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
linux-doc@...r.kernel.org, Russell King <linux@...linux.org.uk>,
linux-kernel@...r.kernel.org, James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Will Deacon <will@...nel.org>, kvmarm@...ts.cs.columbia.edu,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v6 07/10] KVM: arm64: Provide VCPU attributes for stolen
time
On 19/10/2019 12:28, Marc Zyngier wrote:
> On Fri, 11 Oct 2019 13:59:27 +0100,
> Steven Price <steven.price@....com> wrote:
>>
>> Allow user space to inform the KVM host where in the physical memory
>> map the paravirtualized time structures should be located.
>>
>> User space can set an attribute on the VCPU providing the IPA base
>> address of the stolen time structure for that VCPU. This must be
>> repeated for every VCPU in the VM.
>>
>> The address is given in terms of the physical address visible to
>> the guest and must be 64-byte aligned. The guest will discover the
>> address via a hypercall.
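
(For reference, the expected user space usage is along these lines - an
illustrative sketch only, where vcpu_fd and the chosen IPA are
placeholders supplied by the VMM:)

#include <sys/ioctl.h>
#include <err.h>
#include <linux/kvm.h>

/* Illustrative sketch: point one VCPU's stolen time structure at a
 * 64-byte aligned guest physical address. Real code would pick an
 * IPA backed by an existing memslot.
 */
static void set_pvtime_ipa(int vcpu_fd, __u64 ipa)
{
        struct kvm_device_attr attr = {
                .group = KVM_ARM_VCPU_PVTIME_CTRL,
                .attr  = KVM_ARM_VCPU_PVTIME_IPA,
                .addr  = (__u64)&ipa,   /* pointer to the IPA value */
        };

        /* errno is EINVAL for a misaligned IPA, EEXIST if already set */
        if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
                err(1, "KVM_SET_DEVICE_ATTR(PVTIME_IPA)");
}

(The same call is repeated for each VCPU; the guest then retrieves the
address via the PV_TIME_ST hypercall from earlier in the series.)
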
>>
>> Signed-off-by: Steven Price <steven.price@....com>
>> ---
>> arch/arm64/include/asm/kvm_host.h | 7 +++++
>> arch/arm64/include/uapi/asm/kvm.h | 2 ++
>> arch/arm64/kvm/guest.c | 9 ++++++
>> include/uapi/linux/kvm.h | 2 ++
>> virt/kvm/arm/pvtime.c | 47 +++++++++++++++++++++++++++++++
>> 5 files changed, 67 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 1697e63f6dd8..6af16b29a41f 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -489,6 +489,13 @@ long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu);
>> long kvm_hypercall_stolen_time(struct kvm_vcpu *vcpu);
>> int kvm_update_stolen_time(struct kvm_vcpu *vcpu, bool init);
>>
>> +int kvm_arm_pvtime_set_attr(struct kvm_vcpu *vcpu,
>> + struct kvm_device_attr *attr);
>> +int kvm_arm_pvtime_get_attr(struct kvm_vcpu *vcpu,
>> + struct kvm_device_attr *attr);
>> +int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
>> + struct kvm_device_attr *attr);
>> +
>> static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
>> {
>> vcpu_arch->steal.base = GPA_INVALID;
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 67c21f9bdbad..cff1ba12c768 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -323,6 +323,8 @@ struct kvm_vcpu_events {
>> #define KVM_ARM_VCPU_TIMER_CTRL 1
>> #define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0
>> #define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
>> +#define KVM_ARM_VCPU_PVTIME_CTRL 2
>> +#define KVM_ARM_VCPU_PVTIME_IPA 0
>>
>> /* KVM_IRQ_LINE irq field index values */
>> #define KVM_ARM_IRQ_VCPU2_SHIFT 28
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index dfd626447482..d3ac9d2fd405 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -858,6 +858,9 @@ int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
>> case KVM_ARM_VCPU_TIMER_CTRL:
>> ret = kvm_arm_timer_set_attr(vcpu, attr);
>> break;
>> + case KVM_ARM_VCPU_PVTIME_CTRL:
>> + ret = kvm_arm_pvtime_set_attr(vcpu, attr);
>> + break;
>> default:
>> ret = -ENXIO;
>> break;
>> @@ -878,6 +881,9 @@ int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
>> case KVM_ARM_VCPU_TIMER_CTRL:
>> ret = kvm_arm_timer_get_attr(vcpu, attr);
>> break;
>> + case KVM_ARM_VCPU_PVTIME_CTRL:
>> + ret = kvm_arm_pvtime_get_attr(vcpu, attr);
>> + break;
>> default:
>> ret = -ENXIO;
>> break;
>> @@ -898,6 +904,9 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>> case KVM_ARM_VCPU_TIMER_CTRL:
>> ret = kvm_arm_timer_has_attr(vcpu, attr);
>> break;
>> + case KVM_ARM_VCPU_PVTIME_CTRL:
>> + ret = kvm_arm_pvtime_has_attr(vcpu, attr);
>> + break;
>> default:
>> ret = -ENXIO;
>> break;
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 52641d8ca9e8..a540c8357049 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1227,6 +1227,8 @@ enum kvm_device_type {
>> #define KVM_DEV_TYPE_ARM_VGIC_ITS KVM_DEV_TYPE_ARM_VGIC_ITS
>> KVM_DEV_TYPE_XIVE,
>> #define KVM_DEV_TYPE_XIVE KVM_DEV_TYPE_XIVE
>> + KVM_DEV_TYPE_ARM_PV_TIME,
>> +#define KVM_DEV_TYPE_ARM_PV_TIME KVM_DEV_TYPE_ARM_PV_TIME
>> KVM_DEV_TYPE_MAX,
>> };
>>
>> diff --git a/virt/kvm/arm/pvtime.c b/virt/kvm/arm/pvtime.c
>> index a90f1b4ebd13..9dc466861e1e 100644
>> --- a/virt/kvm/arm/pvtime.c
>> +++ b/virt/kvm/arm/pvtime.c
>> @@ -2,7 +2,9 @@
>> // Copyright (C) 2019 Arm Ltd.
>>
>> #include <linux/arm-smccc.h>
>> +#include <linux/kvm_host.h>
>>
>> +#include <asm/kvm_mmu.h>
>> #include <asm/pvclock-abi.h>
>>
>> #include <kvm/arm_hypercalls.h>
>> @@ -75,3 +77,48 @@ long kvm_hypercall_stolen_time(struct kvm_vcpu *vcpu)
>>
>> return vcpu->arch.steal.base;
>> }
>> +
>> +int kvm_arm_pvtime_set_attr(struct kvm_vcpu *vcpu,
>> + struct kvm_device_attr *attr)
>> +{
>> + u64 __user *user = (u64 __user *)attr->addr;
>> + u64 ipa;
>> +
>> + if (attr->attr != KVM_ARM_VCPU_PVTIME_IPA)
>> + return -ENXIO;
>> +
>> + if (get_user(ipa, user))
>> + return -EFAULT;
>> + if (!IS_ALIGNED(ipa, 64))
>> + return -EINVAL;
>> + if (vcpu->arch.steal.base != GPA_INVALID)
>> + return -EEXIST;
>> + vcpu->arch.steal.base = ipa;
>
> And what if this IPA doesn't point to any memslot? I understand that
> everything will still work (kvm_put_user() will handle the mishap),
> but it makes it hard for userspace to know that something is wrong.
>
> Is there any problem in mandating that the corresponding memslot
> already has been created, and enforcing this check?
No, that could be done. As you mentioned, nothing bad will happen (to
the host) if this is wrong, so I didn't see the need to enforce that the
memory is set up first. And the check will be pretty weak because
nothing stops the memslot vanishing afterwards. But I guess this might
make it easier to figure out what has gone wrong in user space, and we
can always remove this ordering restriction in future if necessary. So
I'll add a check for now.
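
I'm thinking of something like the below (untested sketch against this
patch), with the direct assignment of steal.base replaced:

        struct kvm *kvm = vcpu->kvm;
        int ret = 0;
        int idx;

        /* existing -ENXIO/-EFAULT/-EINVAL/-EEXIST checks as above */

        /* Advisory only: the memslot backing the IPA can still
         * disappear once the SRCU read lock is dropped.
         */
        idx = srcu_read_lock(&kvm->srcu);
        if (kvm_is_error_hva(gfn_to_hva(kvm, ipa >> PAGE_SHIFT)))
                ret = -EINVAL;
        srcu_read_unlock(&kvm->srcu, idx);

        if (!ret)
                vcpu->arch.steal.base = ipa;

        return ret;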
Thanks,
Steve