Message-ID: <2655bc5d-eac1-7cbe-d3b2-5dc9ad3ffa5e@redhat.com>
Date: Mon, 27 Jan 2020 10:18:11 +0100
From: Thomas Huth <thuth@...hat.com>
To: Ben Gardon <bgardon@...gle.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Cannon Matthews <cannonmatthews@...gle.com>,
Peter Xu <peterx@...hat.com>,
Andrew Jones <drjones@...hat.com>,
Peter Shier <pshier@...gle.com>,
Oliver Upton <oupton@...gle.com>
Subject: Re: [PATCH v4 01/10] KVM: selftests: Create a demand paging test
On 23/01/2020 19.04, Ben Gardon wrote:
> While userfaultfd, KVM's demand paging implementation, is not specific
> to KVM, having a benchmark for its performance will be useful for
> guiding performance improvements to KVM. As a first step towards creating
> a userfaultfd demand paging test, create a simple memory access test,
> based on dirty_log_test.
>
> Reviewed-by: Oliver Upton <oupton@...gle.com>
> Signed-off-by: Ben Gardon <bgardon@...gle.com>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/Makefile | 3 +
> .../selftests/kvm/demand_paging_test.c | 286 ++++++++++++++++++
> 3 files changed, 290 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index 30072c3f52fbe..9619d96e15c41 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -17,3 +17,4 @@
> /clear_dirty_log_test
> /dirty_log_test
> /kvm_create_max_vcpus
> +/demand_paging_test
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 3138a916574a9..e2e1b92faee3b 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -28,15 +28,18 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
> TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
> TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
> TEST_GEN_PROGS_x86_64 += dirty_log_test
> +TEST_GEN_PROGS_x86_64 += demand_paging_test
> TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
>
> TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
> TEST_GEN_PROGS_aarch64 += dirty_log_test
> +TEST_GEN_PROGS_aarch64 += demand_paging_test
> TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
>
> TEST_GEN_PROGS_s390x = s390x/memop
> TEST_GEN_PROGS_s390x += s390x/sync_regs_test
> TEST_GEN_PROGS_s390x += dirty_log_test
> +TEST_GEN_PROGS_s390x += demand_paging_test
> TEST_GEN_PROGS_s390x += kvm_create_max_vcpus

I gave your series a quick try on s390x (without patch 10/10, since that
one is causing more trouble), but the test does not work there yet:
# selftests: kvm: demand_paging_test
# ==== Test Assertion Failure ====
# lib/kvm_util.c:700: ret == 0
# pid=247240 tid=247240 - Invalid argument
# 1 0x0000000001004085: vm_userspace_mem_region_add at kvm_util.c:695
# 2 0x00000000010042dd: _vm_create at kvm_util.c:233
# 3 0x0000000001001b07: create_vm at demand_paging_test.c:185
# 4 (inlined by) run_test at demand_paging_test.c:387
# 5 (inlined by) main at demand_paging_test.c:676
# 6 0x000003ffb5323461: ?? ??:0
# 7 0x000000000100259d: .annobin_init.c.hot at crt1.o:?
# 8 0xffffffffffffffff: ?? ??:0
# KVM_SET_USER_MEMORY_REGION IOCTL failed,
# rc: -1 errno: 22
# slot: 0 flags: 0x0
# guest_phys_addr: 0x0 size: 0x607000
# Testing guest mode: PA-bits:40, VA-bits:48, 4K pages
not ok 4 selftests: kvm: demand_paging_test # exit=254

I'd suggest leaving it disabled on s390x until the issue has been debugged.
Thomas