Message-ID: <aa044725031407b86e7b6edf8a9426166242b8d4.camel@redhat.com>
Date: Wed, 03 Nov 2021 14:49:48 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
"open list:CRYPTO API" <linux-crypto@...r.kernel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Herbert Xu <herbert@...dor.apana.org.au>,
Borislav Petkov <bp@...en8.de>,
Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"H. Peter Anvin" <hpa@...or.com>,
Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [PATCH] crypto: x86/aes-ni: fix AVX detection
On Wed, 2021-11-03 at 14:46 +0200, Maxim Levitsky wrote:
> Fix two semi-theoretical issues in the way AVX support is detected.
>
> 1. AVX is assumed to be present whenever AVX2 is present.
> That can be false in a VM, and while it can be considered
> a hypervisor bug, the kernel should not crash if it happens.
>
> 2. YMM state can be soft-disabled in XCR0.
>
> Fix both issues by using cpu_has_xfeatures(XFEATURE_MASK_YMM, NULL)
> to check for usable AVX support.
>
> Fixes: d764593af9249 ("crypto: aesni - AVX and AVX2 version of AESNI-GCM encode and decode")
>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
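To spell out the distinction the patch relies on: boot_cpu_has() only
reflects the CPUID feature bit, while cpu_has_xfeatures() also verifies
that the kernel actually enabled the corresponding register state in
XCR0, which is what makes it safe to touch the YMM registers. A rough
sketch of that difference (illustrative only, not part of the patch;
the exact header locations are from memory and may differ between
kernel versions):

/*
 * Illustrative only -- not part of the patch.  Header locations are
 * an assumption and may vary between kernel versions.
 */
#include <linux/types.h>
#include <linux/printk.h>
#include <asm/cpufeature.h>	/* boot_cpu_has(), X86_FEATURE_* */
#include <asm/fpu/api.h>	/* cpu_has_xfeatures() */
#include <asm/fpu/xstate.h>	/* XFEATURE_MASK_YMM */

static bool avx_really_usable(void)
{
	const char *missing = NULL;

	/* CPUID says the AVX instructions exist... */
	if (!boot_cpu_has(X86_FEATURE_AVX))
		return false;

	/* ...but YMM state must also be enabled by the kernel in XCR0 */
	if (!cpu_has_xfeatures(XFEATURE_MASK_YMM, &missing)) {
		pr_info("AVX present but %s state not enabled\n", missing);
		return false;
	}

	return true;
}
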
I forgot to mention that Paolo Bonzini helped me with this patch,
especially with the way to detect XCR0 bits.
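For anyone curious what the XCR0 check boils down to, here is a
stand-alone user-space sketch of the same test (illustrative only;
the __get_cpuid()/_xgetbv() builtins need -mxsave and are of course
not what the kernel itself uses):

/*
 * Illustrative user-space sketch only -- not kernel code and not part
 * of the patch.  Build with e.g.: gcc -mxsave -c xcr0.c
 */
#include <stdbool.h>
#include <stdint.h>
#include <cpuid.h>	/* __get_cpuid() */
#include <immintrin.h>	/* _xgetbv() */

static bool ymm_state_enabled_in_xcr0(void)
{
	unsigned int eax, ebx, ecx, edx;
	uint64_t xcr0;

	/* XGETBV is only usable if the OS set CPUID.1:ECX.OSXSAVE (bit 27) */
	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 27)))
		return false;

	/*
	 * XCR0 bit 2 is AVX (YMM) state; that is what XFEATURE_MASK_YMM
	 * corresponds to.  (Full AVX detection also wants bit 1, SSE
	 * state, which the kernel always enables for itself.)
	 */
	xcr0 = _xgetbv(0);
	return xcr0 & (1ull << 2);
}
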
Best regards,
Maxim Levitsky
> ---
> arch/x86/crypto/aesni-intel_glue.c | 25 +++++++++++++------------
> 1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> index 0fc961bef299c..20db1e500ef6f 100644
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -1147,24 +1147,25 @@ static int __init aesni_init(void)
> if (!x86_match_cpu(aesni_cpu_id))
> return -ENODEV;
> #ifdef CONFIG_X86_64
> - if (boot_cpu_has(X86_FEATURE_AVX2)) {
> - pr_info("AVX2 version of gcm_enc/dec engaged.\n");
> - static_branch_enable(&gcm_use_avx);
> - static_branch_enable(&gcm_use_avx2);
> - } else
> - if (boot_cpu_has(X86_FEATURE_AVX)) {
> - pr_info("AVX version of gcm_enc/dec engaged.\n");
> + if (cpu_has_xfeatures(XFEATURE_MASK_YMM, NULL)) {
> +
> static_branch_enable(&gcm_use_avx);
> - } else {
> - pr_info("SSE version of gcm_enc/dec engaged.\n");
> - }
> - if (boot_cpu_has(X86_FEATURE_AVX)) {
> +
> + if (boot_cpu_has(X86_FEATURE_AVX2)) {
> + static_branch_enable(&gcm_use_avx2);
> + pr_info("AVX2 version of gcm_enc/dec engaged.\n");
> + } else {
> + pr_info("AVX version of gcm_enc/dec engaged.\n");
> + }
> +
> /* optimize performance of ctr mode encryption transform */
> static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm);
> pr_info("AES CTR mode by8 optimization enabled\n");
> +
> + } else {
> + pr_info("SSE version of gcm_enc/dec engaged.\n");
> }
> #endif
> -
> err = crypto_register_alg(&aesni_cipher_alg);
> if (err)
> return err;