Date:   Wed, 26 Oct 2022 22:21:13 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>
Cc:     kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Michael Kelley <mikelley@...rosoft.com>,
        Siddharth Chandrasekaran <sidcha@...zon.de>,
        Yuan Yao <yuan.yao@...ux.intel.com>,
        Maxim Levitsky <mlevitsk@...hat.com>,
        linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v12 44/46] KVM: selftests: evmcs_test: Introduce L2 TLB
 flush test

On Fri, Oct 21, 2022, Vitaly Kuznetsov wrote:
> @@ -48,6 +49,8 @@ static inline void rdmsr_gs_base(void)
>  
>  void l2_guest_code(void)
>  {
> +	u64 unused;
> +
>  	GUEST_SYNC(7);
>  
>  	GUEST_SYNC(8);
> @@ -64,15 +67,33 @@ void l2_guest_code(void)
>  	vmcall();
>  	rdmsr_gs_base(); /* intercepted */
>  
> +	/* L2 TLB flush tests */
> +	hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
> +			 HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS);
> +	rdmsr_fs_base();
> +	/*
> +	 * Note: hypercall status (RAX) is not preserved correctly by L1 after
> +	 * synthetic vmexit, use unchecked version.
> +	 */
> +	__hyperv_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE | HV_HYPERCALL_FAST_BIT, 0x0,
> +			   HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES | HV_FLUSH_ALL_PROCESSORS,
> +			   &unused);
> +	/* Make sure we're not issuing Hyper-V TLB flush call again */
> +	__asm__ __volatile__ ("mov $0xdeadbeef, %rcx");

This needs a clobber.  It won't cause problems in the current code, but it will
make someone really sad if they add more code after this.

Using %ecx instead of %rcx will also suffice, since 32-bit accesses clear bits 63:32.
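E.g. (untested, and note gcc wants the "%%" spelling once a clobber list is
added):

	/* Make sure we're not issuing Hyper-V TLB flush call again */
	__asm__ __volatile__ ("mov $0xdeadbeef, %%ecx" : : : "rcx");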

Even better, if a "nop" isn't required to get the compiler to emit a preamble,
would be to load ECX through an input constraint; that way the compiler "knows"
the value of ECX and can optimize for it (though it's extremely unlikely
0xdeadbeef will ever be a useful value).
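Something like this (also untested; whether the empty template is enough to get
the load emitted or a "nop" is needed, I haven't checked):

	__asm__ __volatile__ ("" : : "c" ((u32)0xdeadbeef));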

Actually, not setting RCX in vmcall() is a nasty bug waiting to happen, e.g. if
RCX just happens to contain a value that gets routed to L0.

Rather than handle this as a one-off, can you insert a prep patch to have the
common vmcall() stuff RCX with a "safe" value?
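E.g. (completely untested sketch, hand-waving whatever the current helper does
to preserve RBP and friends):

	static inline void vmcall(void)
	{
		u64 rcx = 0xdeadbeef;

		/*
		 * Stuff RCX with a "safe" value so stale guest state can't be
		 * misinterpreted as hypercall input; "+c" because L1 doesn't
		 * preserve GPRs across the induced exit.
		 */
		__asm__ __volatile__("vmcall"
				     : "+c"(rcx)
				     : : "rax", "rbx", "rdx", "rsi", "rdi",
					 "r8", "r9", "r10", "r11", "r12",
					 "r13", "r14", "r15");
	}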

Related side topic, rdmsr_{f,g}s_base() should also use input constraints, and
should use a proper #define for the MSRs.  Also, why on earth do those clobber all GPRs?
Oooh, because they get routed to L1 and L1 doesn't preserve GPRs.
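I.e. something like (untested, assuming MSR_FS_BASE is already visible via
msr-index.h in the selftests):

	static inline void rdmsr_fs_base(void)
	{
		u64 msr = MSR_FS_BASE;

		/*
		 * "+c" because the access is routed to L1, which doesn't
		 * preserve GPRs, so RCX can't be assumed to survive; rdmsr
		 * itself only writes EDX:EAX.
		 */
		__asm__ __volatile__("rdmsr"
				     : "+c"(msr)
				     : : "rax", "rbx", "rdx", "rsi", "rdi",
					 "r8", "r9", "r10", "r11", "r12",
					 "r13", "r14", "r15");
	}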

Related side topic #2: KVM's kvm_xen_hypercall() is broken, as it checks the wrong
input register for 64-bit (RAX instead of RCX).  Not sure that's even a
fixable bug though.
