Date:   Wed, 29 Dec 2021 00:21:06 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Jing Liu <jing2.liu@...el.com>
Cc:     x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-kselftest@...r.kernel.org,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        dave.hansen@...ux.intel.com, pbonzini@...hat.com, corbet@....net,
        shuah@...nel.org, jun.nakajima@...el.com, kevin.tian@...el.com,
        jing2.liu@...ux.intel.com, guang.zeng@...el.com,
        wei.w.wang@...el.com, yang.zhong@...el.com
Subject: Re: [PATCH v3 16/22] kvm: x86: Add XCR0 support for Intel AMX

On Wed, Dec 22, 2021, Jing Liu wrote:
> Two XCR0 bits are defined for AMX to support the XSAVE mechanism. Bit 17
> is for tilecfg and bit 18 is for tiledata.
> 
> The value of XCR0[18:17] is always either 00b or 11b.

Is that an SDM requirement, or an arbitrary Linux/KVM requirement?
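
For reference, a minimal sketch of the two feature bits involved; the bit
positions follow the quoted text rather than being copied verbatim from the
kernel headers.  XTILECFG is XCR0 bit 17, XTILEDATA is bit 18, and the
combined mask is what makes 00b and 11b the only consistent values for
XCR0[18:17]:

	/* Sketch, not the kernel's actual headers: the two tile state bits. */
	#define XFEATURE_MASK_XTILE_CFG		(1ULL << 17)
	#define XFEATURE_MASK_XTILE_DATA	(1ULL << 18)
	#define XFEATURE_MASK_XTILE		(XFEATURE_MASK_XTILE_CFG | \
						 XFEATURE_MASK_XTILE_DATA)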

> Also, the SDM
> recommends that only 64-bit operating systems enable Intel AMX by
> setting XCR0[18:17]. If a 32-bit guest tries to set the dynamic bits, it

This is wrong.  It has nothing to do with 32-bit guests.  The restriction is on
32-bit _host kernels_, which I'm guessing never set the tile bits in _host_ XCR0.
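
For illustration only, a rough sketch of why that works out; the helper name
guest_cpuid_xcr0() is made up here, while guest_supported_xcr0 and
KVM_SUPPORTED_XCR0 are the pieces already referenced in this patch.  The tile
bits can only reach a guest if the host kernel enabled them in its own XCR0,
so on a 32-bit host they are filtered out long before __kvm_set_xcr() runs:

	/* Setup time: only advertise XCR0 bits the host kernel itself enabled. */
	supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;

	/* Per vCPU, after userspace sets CPUID (guest_cpuid_xcr0() is illustrative). */
	vcpu->arch.guest_supported_xcr0 = guest_cpuid_xcr0(vcpu) & supported_xcr0;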

> fails the vcpu->arch.guest_supported_xcr0 check and gets a #GP.
> 
> Signed-off-by: Yang Zhong <yang.zhong@...el.com>
> Signed-off-by: Jing Liu <jing2.liu@...el.com>
> ---
>  arch/x86/kvm/x86.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a48a89f73027..c558c098979a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -210,7 +210,7 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
>  #define KVM_SUPPORTED_XCR0     (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
>  				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
>  				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
> -				| XFEATURE_MASK_PKRU)
> +				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
>  
>  u64 __read_mostly host_efer;
>  EXPORT_SYMBOL_GPL(host_efer);
> @@ -990,6 +990,12 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
>  		if ((xcr0 & XFEATURE_MASK_AVX512) != XFEATURE_MASK_AVX512)
>  			return 1;
>  	}
> +
> +#ifdef CONFIG_X86_64

Drop the #ifdef; it adds no meaningful value and requires the reader to think
far harder than they should have to.  Yes, it's technically dead code for 32-bit
KVM, but no one cares about the performance of 32-bit KVM, and in any case it's
extremely unlikely this will be anything but noise.
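
For concreteness, roughly what the hunk might look like with the #ifdef
dropped, per the above (same logic as the posted patch, just unguarded):

	/* Both tile bits must be set together, or not at all. */
	if ((xcr0 & XFEATURE_MASK_XTILE) &&
	    ((xcr0 & XFEATURE_MASK_XTILE) != XFEATURE_MASK_XTILE))
		return 1;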

> +	if ((xcr0 & XFEATURE_MASK_XTILE) &&
> +	    ((xcr0 & XFEATURE_MASK_XTILE) != XFEATURE_MASK_XTILE))
> +		return 1;
> +#endif
>  	vcpu->arch.xcr0 = xcr0;
>  
>  	if ((xcr0 ^ old_xcr0) & XFEATURE_MASK_EXTEND)
> -- 
> 2.27.0
> 
