Message-ID: <CAJHc60yhg7oYiJpHJK27M7=qo0CMOX+Qj9+q-ZHgTVhWr_76aA@mail.gmail.com>
Date: Fri, 10 Sep 2021 11:03:58 -0700
From: Raghavendra Rao Ananta <rananta@...gle.com>
To: Andrew Jones <drjones@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Marc Zyngier <maz@...nel.org>,
James Morse <james.morse@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Peter Shier <pshier@...gle.com>,
Ricardo Koller <ricarkol@...gle.com>,
Oliver Upton <oupton@...gle.com>,
Reiji Watanabe <reijiw@...gle.com>,
Jing Zhang <jingzhangos@...gle.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v4 09/18] KVM: arm64: selftests: Add guest support to get
the vcpuid
On Fri, Sep 10, 2021 at 1:10 AM Andrew Jones <drjones@...hat.com> wrote:
>
> On Thu, Sep 09, 2021 at 10:10:56AM -0700, Raghavendra Rao Ananta wrote:
> > On Thu, Sep 9, 2021 at 12:56 AM Andrew Jones <drjones@...hat.com> wrote:
> > >
> > > On Thu, Sep 09, 2021 at 01:38:09AM +0000, Raghavendra Rao Ananta wrote:
> ...
> > > > + for (i = 0; i < KVM_MAX_VCPUS; i++) {
> > > > + vcpuid = vcpuid_map[i].vcpuid;
> > > > + GUEST_ASSERT_1(vcpuid != VM_VCPUID_MAP_INVAL, mpidr);
> > >
> > > We don't want this assert if it's possible to have sparse maps, which
> > > it probably isn't ever going to be, but...
> > >
> > If you look at the way the array is arranged, the element with
> > VM_VCPUID_MAP_INVAL acts as a sentinel for us and all the proper
> > elements would lie before this. So, I don't think we'd have a sparse
> > array here.
>
> If we switch to my suggestion of adding map entries at vcpu-add time and
> removing them at vcpu-rm time, then the array may become sparse depending
> on the order of removals.
>
Oh, I get it now. But like you mentioned, we'd add entries to the map
while the vCPUs are getting added and then sync_global_to_guest()
later. That seems like a lot of maintenance, unless I'm interpreting
it wrong or missing an advantage.
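Just to make sure we're talking about the same thing, my reading of
that approach is something like the sketch below (the helper names
vm_cpuid_map_add/vm_cpuid_map_remove are made up, not from the series):

/*
 * Hypothetical sketch of the add/remove-time bookkeeping: every vCPU
 * add/remove has to touch the map and re-sync it to the guest.
 */
static void vm_cpuid_map_add(struct kvm_vm *vm, struct vcpu *vcpu)
{
	struct vm_cpuid_map *map = &cpuid_map[vcpu->id];

	map->vcpuid = vcpu->id;
	get_reg(vm, vcpu->id, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), &map->hw_cpuid);
	map->hw_cpuid &= MPIDR_HWID_BITMASK;
	sync_global_to_guest(vm, cpuid_map);
}

static void vm_cpuid_map_remove(struct kvm_vm *vm, struct vcpu *vcpu)
{
	/* Removal punches a hole, so the array can go sparse here */
	cpuid_map[vcpu->id].vcpuid = VM_CPUID_MAP_INVAL;
	sync_global_to_guest(vm, cpuid_map);
}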
I do like your idea of coming up with an arch-independent interface,
though. So I modeled it on the familiar ucall interface that we have,
and it does everything in one shot to avoid any confusion:
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 010b59b13917..0e87cb0c980b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -400,4 +400,24 @@ uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc);
 int vm_get_stats_fd(struct kvm_vm *vm);
 int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
+#define VM_CPUID_MAP_INVAL	-1
+
+struct vm_cpuid_map {
+	uint64_t hw_cpuid;
+	int vcpuid;
+};
+
+/*
+ * Create a vcpuid:hw_cpuid map and export it to the guest
+ *
+ * Input Args:
+ *   vm - KVM VM.
+ *
+ * Output Args: None
+ *
+ * Must be called after all the vCPUs are added to the VM
+ */
+void vm_cpuid_map_init(struct kvm_vm *vm);
+int guest_get_vcpuid(void);
+
 #endif /* SELFTEST_KVM_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index db64ee206064..e796bb3984a6 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -16,6 +16,8 @@
 static vm_vaddr_t exception_handlers;
 
+static struct vm_cpuid_map cpuid_map[KVM_MAX_VCPUS];
+
 static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
 {
 	return (v + vm->page_size) & ~(vm->page_size - 1);
 }
@@ -426,3 +428,42 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
 	assert(vector < VECTOR_NUM);
 	handlers->exception_handlers[vector][0] = handler;
 }
+
+void vm_cpuid_map_init(struct kvm_vm *vm)
+{
+	int i = 0;
+	struct vcpu *vcpu;
+	struct vm_cpuid_map *map;
+
+	TEST_ASSERT(!list_empty(&vm->vcpus), "vCPUs must have been created\n");
+
+	list_for_each_entry(vcpu, &vm->vcpus, list) {
+		map = &cpuid_map[i++];
+		map->vcpuid = vcpu->id;
+		get_reg(vm, vcpu->id, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), &map->hw_cpuid);
+		map->hw_cpuid &= MPIDR_HWID_BITMASK;
+	}
+
+	if (i < KVM_MAX_VCPUS)
+		cpuid_map[i].vcpuid = VM_CPUID_MAP_INVAL;
+
+	sync_global_to_guest(vm, cpuid_map);
+}
+
+int guest_get_vcpuid(void)
+{
+	int i, vcpuid;
+	uint64_t mpidr = read_sysreg(mpidr_el1) & MPIDR_HWID_BITMASK;
+
+	for (i = 0; i < KVM_MAX_VCPUS; i++) {
+		vcpuid = cpuid_map[i].vcpuid;
+
+		/* Was this vCPU added to the VM after the map was initialized? */
+		GUEST_ASSERT_1(vcpuid != VM_CPUID_MAP_INVAL, mpidr);
+
+		if (mpidr == cpuid_map[i].hw_cpuid)
+			return vcpuid;
+	}
+
+	/* We should not be reaching here */
+	GUEST_ASSERT_1(0, mpidr);
+	return -1;
+}
This ensures that the array is never sparse, so the element just past
the last vCPU can act as a sentinel.
If you still feel that preparing the map as and when the vCPUs are
created makes more sense, I can go for that instead.
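For context, the intended usage from a test would look roughly like
this (a rough sketch using the usual selftest helpers; guest_code,
run_test and the vcpuids are made up):

static void guest_code(void)
{
	/* Guest side: resolve our own vcpuid from MPIDR_EL1 */
	int vcpuid = guest_get_vcpuid();

	GUEST_SYNC(vcpuid);
	GUEST_DONE();
}

static void run_test(void)
{
	struct kvm_vm *vm;

	vm = vm_create_default(0, 0, guest_code);
	vm_vcpu_add_default(vm, 1, guest_code);

	/* Host side: must run after all the vCPUs are added */
	vm_cpuid_map_init(vm);
}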
Regards,
Raghavendra
> Thanks,
> drew
>