Message-ID: <CAMj1kXHzK0pSjuRYcZ3E2PQzCx4PTAC-UDHirgFDPYEyLMtoeA@mail.gmail.com>
Date: Sun, 31 Aug 2025 15:11:04 +0200
From: Ard Biesheuvel <ardb@...nel.org>
To: Borislav Petkov <bp@...en8.de>
Cc: Ard Biesheuvel <ardb+git@...gle.com>, linux-kernel@...r.kernel.org, 
	linux-efi@...r.kernel.org, x86@...nel.org, Ingo Molnar <mingo@...nel.org>, 
	Kevin Loughlin <kevinloughlin@...gle.com>, Tom Lendacky <thomas.lendacky@....com>, 
	Josh Poimboeuf <jpoimboe@...nel.org>, Peter Zijlstra <peterz@...radead.org>, 
	Nikunj A Dadhania <nikunj@....com>
Subject: Re: [PATCH v7 05/22] x86/sev: Move GHCB page based HV communication
 out of startup code

On Sun, 31 Aug 2025 at 14:30, Ard Biesheuvel <ardb@...nel.org> wrote:
>
> On Sun, 31 Aug 2025 at 13:15, Borislav Petkov <bp@...en8.de> wrote:
> >
> > On Sun, Aug 31, 2025 at 12:56:41PM +0200, Ard Biesheuvel wrote:
> > > OK, it appears I've fixed it in the wrong place: the next patch adds
> > > back the definition of has_cpuflag(), so it seems I squashed that hunk
> > > into the wrong patch.
> >
> > The real question is - and I'm sceptical - whether the startup code runs too
> > early for boot_cpu_has(). And how is the startup code going to call
> > boot_cpu_has()?
> >
> > /me builds .s
> >
> > Aha, so it gets converted into a boot_cpu_data access:
> >
> > # arch/x86/boot/startup/sev-shared.c:662:       if (validate && !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
> >         testb   %r13b, %r13b    # validate
> >         je      .L46    #,
> > # ./arch/x86/include/asm/bitops.h:206:          (addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
> >         movq    80+boot_cpu_data(%rip), %rax    # MEM[(const volatile long unsigned int *)&boot_cpu_data + 80B], _15
> > # arch/x86/boot/startup/sev-shared.c:662:       if (validate && !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
> >
> > But the former question remains: AFAIK, you want to run the startup code waaay
> > earlier, before we do identify_boot_cpu(), which prepares boot_cpu_data, right?
> >
>
> I suppose that in this particular case, things work out fine because
> calling sev_evict_cache() unnecessarily is harmless. But I agree that
> in general, relying on CPU flags in code that may be called this early
> is not great.
>
> Perhaps this conditional should be moved into the caller instead
> (early_set_pages_state()), and early callers from inside the startup
> code should call sev_evict_cache() unconditionally?
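
Roughly, that caller-side option would look something like the untested sketch
below; the exact signature of early_set_pages_state() and the name of the
startup-side helper are only approximations, the point is just where the CPU
flag check ends up:

/*
 * Startup code caller: boot_cpu_data is not populated yet, so evict
 * unconditionally after validating the page.
 */
static void startup_pvalidate_4k_page(unsigned long vaddr, unsigned long paddr)
{
	/* ... PVALIDATE the page, with no CPU flag check at this point ... */

	sev_evict_cache((void *)vaddr, 1);
}

/* Core kernel caller: CPU features are known by now, so the check is safe. */
void early_set_pages_state(unsigned long vaddr, unsigned long paddr,
			   unsigned long npages, enum psc_op op)
{
	/* ... existing page state change / PVALIDATE loop ... */

	if (op == SNP_PAGE_STATE_PRIVATE &&
	    !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
		sev_evict_cache((void *)vaddr, npages);
}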

Alternatively, we might consider the below:

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 235e557fd10c..bc59a421c7b4 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -342,6 +342,8 @@
        if (!(eax & BIT(1)))
                return -ENODEV;

+       sev_snp_needs_sfw = !(ebx & BIT(31));
+
        return ebx & 0x3f;
 }

diff --git a/arch/x86/boot/startup/sev-shared.c b/arch/x86/boot/startup/sev-shared.c
index 8d2476e1ad3b..08cc1568d8af 100644
--- a/arch/x86/boot/startup/sev-shared.c
+++ b/arch/x86/boot/startup/sev-shared.c
@@ -31,6 +31,8 @@
 static u32 cpuid_hyp_range_max __ro_after_init;
 static u32 cpuid_ext_range_max __ro_after_init;

+bool sev_snp_needs_sfw;
+
 void __noreturn
 sev_es_terminate(unsigned int set, unsigned int reason)
 {
@@ -639,7 +641,7 @@
         * If validating memory (making it private) and affected by the
         * cache-coherency vulnerability, perform the cache eviction mitigation.
         */
-       if (validate && !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
+       if (validate && sev_snp_needs_sfw)
                sev_evict_cache((void *)vaddr, 1);
 }

diff --git a/arch/x86/boot/startup/sme.c b/arch/x86/boot/startup/sme.c
index 39e7e9d18974..2ddde901c8c5 100644
--- a/arch/x86/boot/startup/sme.c
+++ b/arch/x86/boot/startup/sme.c
@@ -521,6 +521,7 @@
                return;

        me_mask = 1UL << (ebx & 0x3f);
+       sev_snp_needs_sfw = !(ebx & BIT(31));

        /* Check the SEV MSR whether SEV or SME is enabled */
        sev_status = msr = native_rdmsrq(MSR_AMD64_SEV);
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index d3f0f17834fa..32178b8f9b87 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -570,6 +570,8 @@
 extern u16 ghcb_version;
 extern struct ghcb *boot_ghcb;

+extern bool sev_snp_needs_sfw;
+
 struct psc_desc {
        enum psc_op op;
        struct svsm_ca *ca;
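
For reference, the flag is derived straight from CPUID leaf 0x8000001F and so
does not depend on boot_cpu_data at all; a stand-alone reading of the same
bits would look roughly like the following (illustrative only, and it assumes
EBX bit 31 is indeed the CPUID bit behind X86_FEATURE_COHERENCY_SFW_NO, as the
hunks above imply):

/*
 * Illustrative only: EBX[5:0] of CPUID 0x8000001F is the C-bit position
 * (hence the "& 0x3f" above), and bit 31 is assumed here to be the bit
 * behind X86_FEATURE_COHERENCY_SFW_NO.
 */
static bool snp_cpuid_needs_sfw(void)
{
	unsigned int eax = 0x8000001f, ebx = 0, ecx = 0, edx = 0;

	native_cpuid(&eax, &ebx, &ecx, &edx);

	return !(ebx & BIT(31));
}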
