Message-ID: <ZS8wdNtAoSvH_jpX@google.com>
Date: Tue, 17 Oct 2023 18:10:12 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Michael Roth <michael.roth@....com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, vannapurve@...gle.com
Subject: Re: [PATCH gmem] KVM: selftests: Fix gmem conversion tests for
multiple vCPUs
On Mon, Oct 16, 2023, Michael Roth wrote:
> Currently the private_mem_conversions_test crashes if invoked with the
> -n <num_vcpus> option without also specifying multiple memslots via -m.
Totally a PEBKAC, not a bug ;-)
> This is because the current implementation assumes -m is specified and
> always sets up the per-vCPU memory with a dedicated memslot for each
> vCPU. When -m is not specified, the test skips setting up
> memslots/memory for secondary vCPUs.
>
> The current code does try to handle a single memslot shared by multiple
> vCPUs in some places, e.g. at the call-site, but test_mem_conversions()
> is missing the important bit: sizing that single memslot so it covers
> all the per-vCPU memory. Implement that handling.
>
> Signed-off-by: Michael Roth <michael.roth@....com>
> ---
> .../kvm/x86_64/private_mem_conversions_test.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> index c04e7d61a585..5eb693fead33 100644
> --- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> @@ -388,10 +388,14 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
> gmem_flags = 0;
> memfd = vm_create_guest_memfd(vm, memfd_size, gmem_flags);
>
> - for (i = 0; i < nr_memslots; i++)
> - vm_mem_add(vm, src_type, BASE_DATA_GPA + size * i,
> - BASE_DATA_SLOT + i, size / vm->page_size,
> - KVM_MEM_PRIVATE, memfd, size * i);
> + if (nr_memslots == 1)
> + vm_mem_add(vm, src_type, BASE_DATA_GPA, BASE_DATA_SLOT,
> + memfd_size / vm->page_size, KVM_MEM_PRIVATE, memfd, 0);
> + else
> + for (i = 0; i < nr_memslots; i++)
The if-else needs curly braces.
> + vm_mem_add(vm, src_type, BASE_DATA_GPA + size * i,
> + BASE_DATA_SLOT + i, size / vm->page_size,
> + KVM_MEM_PRIVATE, memfd, size * i);
But I think that's a moot point, because isn't it easier to do this?
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index c04e7d61a585..c99073098f98 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -367,6 +367,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
*/
const size_t size = align_up(PER_CPU_DATA_SIZE, get_backing_src_pagesz(src_type));
const size_t memfd_size = size * nr_vcpus;
+ const size_t slot_size = memfd_size / nr_memslots;
struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
pthread_t threads[KVM_MAX_VCPUS];
uint64_t gmem_flags;
@@ -390,7 +391,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
for (i = 0; i < nr_memslots; i++)
vm_mem_add(vm, src_type, BASE_DATA_GPA + size * i,
- BASE_DATA_SLOT + i, size / vm->page_size,
+ BASE_DATA_SLOT + i, slot_size / vm->page_size,
KVM_MEM_PRIVATE, memfd, size * i);
for (i = 0; i < nr_vcpus; i++) {