Message-ID: <5aea3131a1166e30e12f9a5ef490327607219193.camel@redhat.com>
Date:   Tue, 07 Nov 2023 20:23:32 +0200
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH 08/14] KVM: selftests: Make all Hyper-V tests explicitly
 dependent on Hyper-V emulation support in KVM

On Wed, 2023-10-25 at 17:24 +0200, Vitaly Kuznetsov wrote:
> In preparation for conditional Hyper-V emulation enablement in KVM, make
> Hyper-V specific tests skip gracefully instead of failing when the
> support is not there.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  tools/testing/selftests/kvm/x86_64/hyperv_clock.c            | 2 ++
>  tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c            | 5 +++--
>  .../selftests/kvm/x86_64/hyperv_extended_hypercalls.c        | 2 ++
>  tools/testing/selftests/kvm/x86_64/hyperv_features.c         | 2 ++
>  tools/testing/selftests/kvm/x86_64/hyperv_ipi.c              | 2 ++
>  tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c         | 1 +
>  tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c        | 2 ++
>  7 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
> index f25749eaa6a8..f5e1e98f04f9 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
> @@ -211,6 +211,8 @@ int main(void)
>  	vm_vaddr_t tsc_page_gva;
>  	int stage;
>  
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_TIME));
> +
Makes sense.
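
(For context: TEST_REQUIRE() is what turns a missing capability into a skip
rather than a failure. Roughly, paraphrasing from
tools/testing/selftests/kvm/include/test_util.h (the exact definition may
differ slightly):

#define __TEST_REQUIRE(f, fmt, ...)				\
do {								\
	if (!(f))						\
		ksft_exit_skip(fmt "\n", ##__VA_ARGS__);	\
} while (0)

#define TEST_REQUIRE(f) __TEST_REQUIRE(f, "Requirement not met: %s", #f)

so on a kernel with Hyper-V emulation disabled the test is reported as
skipped instead of failed.)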
>  	vm = vm_create_with_one_vcpu(&vcpu, guest_main);
>  
>  	vcpu_set_hv_cpuid(vcpu);
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
> index 7bde0c4dfdbd..4c7257ecd2a6 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
> @@ -240,11 +240,12 @@ int main(int argc, char *argv[])
>  	struct ucall uc;
>  	int stage;
>  
> -	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> -
>  	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
>  	TEST_REQUIRE(kvm_has_cap(KVM_CAP_NESTED_STATE));
>  	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_ENLIGHTENED_VMCS));
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_DIRECT_TLBFLUSH));

The test indeed uses the direct TLB flush.

It might be a good idea to rename the test to hyperv_nested_vmx or something
like that in the future, because it tests more than just the eVMCS.
It's not urgent though.

> +
> +	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
>  
>  	hcall_page = vm_vaddr_alloc_pages(vm, 1);
>  	memset(addr_gva2hva(vm, hcall_page), 0x0,  getpagesize());
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c
> index e036db1f32b9..949e08e98f31 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c
> @@ -43,6 +43,8 @@ int main(void)
>  	uint64_t *outval;
>  	struct ucall uc;
>  
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_CPUID));
Yep, the test uses KVM_GET_SUPPORTED_HV_CPUID.
> +
>  	/* Verify if extended hypercalls are supported */
>  	if (!kvm_cpuid_has(kvm_get_supported_hv_cpuid(),
>  			   HV_ENABLE_EXTENDED_HYPERCALLS)) {
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> index 9f28aa276c4e..387c605a3077 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> @@ -690,6 +690,8 @@ static void guest_test_hcalls_access(void)
>  
>  int main(void)
>  {
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_ENFORCE_CPUID));
> +
Correct.
>  	pr_info("Testing access to Hyper-V specific MSRs\n");
>  	guest_test_msrs_access();
>  
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
> index 6feb5ddb031d..65e5f4c05068 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
> @@ -248,6 +248,8 @@ int main(int argc, char *argv[])
>  	int stage = 1, r;
>  	struct ucall uc;
>  
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_SEND_IPI));
Correct.
> +
>  	vm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);
>  
>  	/* Hypercall input/output */
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
> index 6c1278562090..c9b18707edc0 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
> @@ -158,6 +158,7 @@ int main(int argc, char *argv[])
>  	int stage;
>  
>  	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_DIRECT_TLBFLUSH));

Maybe also rename the test in the future to hyperv_nested_svm or something like that,
for the same reason.

>  
>  	/* Create VM */
>  	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
> index 4758b6ef5618..c4443f71f8dd 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
> @@ -590,6 +590,8 @@ int main(int argc, char *argv[])
>  	struct ucall uc;
>  	int stage = 1, r, i;
>  
> +	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_TLBFLUSH));
Also makes sense.
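
All of these new TEST_REQUIRE(kvm_has_cap(...)) checks are cheap: kvm_has_cap()
boils down to a single KVM_CHECK_EXTENSION ioctl on /dev/kvm. A hypothetical
standalone equivalent, just a sketch and not the selftest library code:

#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static bool sys_has_cap(long cap)
{
	/* KVM_CHECK_EXTENSION returns 0 when the capability is absent. */
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int ret = kvm_fd >= 0 ? ioctl(kvm_fd, KVM_CHECK_EXTENSION, cap) : 0;

	if (kvm_fd >= 0)
		close(kvm_fd);
	return ret > 0;
}

int main(void)
{
	printf("KVM_CAP_HYPERV_TLBFLUSH: %d\n",
	       sys_has_cap(KVM_CAP_HYPERV_TLBFLUSH));
	return 0;
}

sys_has_cap(KVM_CAP_HYPERV_TLBFLUSH) returning false is exactly the case
these tests now skip on.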

> +
>  	vm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);
>  
>  	/* Test data page */


Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>

Best regards,
	Maxim Levitsky


