Message-ID: <28a494ca-3173-4072-921c-6c5f5b257e79@amd.com>
Date: Tue, 20 Feb 2024 08:47:54 -0600
From: Tom Lendacky <thomas.lendacky@....com>
To: Borislav Petkov <bp@...en8.de>
Cc: "Huang, Kai" <kai.huang@...el.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, dave.hansen@...el.com, kirill.shutemov@...ux.intel.com,
tglx@...utronix.de, mingo@...hat.com, hpa@...or.com, luto@...nel.org,
peterz@...radead.org, chao.gao@...el.com, bhe@...hat.com,
nik.borisov@...e.com, pbonzini@...hat.com
Subject: Re: [PATCH 1/4] x86/coco: Add a new CC attribute to unify cache flush
during kexec
On 2/20/24 08:28, Borislav Petkov wrote:
> On Mon, Feb 19, 2024 at 04:09:47PM -0600, Tom Lendacky wrote:
>> That's why the '!(sev_status & MSR_AMD64_SEV_ENABLED)' works here.
>
> I would've never figured that out just from staring at the test. :-\
>
>> Basically, if you are bare-metal, it will return true. And it will only
>> return true for machines that support SME and have the
>> MSR_AMD64_SYSCFG_MEM_ENCRYPT bit set in SYS_CFG MSR because of where the
>> 'cc_vendor = CC_VENDOR_AMD' assignment is. However, if you move the
>> 'cc_vendor = CC_VENDOR_AMD' to before the if statement, then you will have
>> the WBINVD called for any machine that supports SME, even if SME is not
>> possible because the proper bit in the SYS_CFG MSR hasn't been set.
>>
>> I know what I'm trying to say, let me know if it is making sense...
>
> Yah, thanks for taking the time to explain.
>
> Here's an even more radical idea:
>
> Why not do WBINVD *unconditionally* on the CPU down path?
>
> - it is the opposite of a fast path, i.e., no one cares
>
> - it'll take care of every possible configuration without ugly logic
>
> - it wouldn't hurt to have the caches nice and coherent before going
> down
>
> Hmmm.
That's what I initially did, but errors were reported; see commit
f23d74f6c66c ("x86/mm: Rework wbinvd, hlt operation in stop_this_cpu()").
But those errors may have been caused by the issue later solved by commit
1f5e7eb7868e ("x86/smp: Make stop_other_cpus() more robust").
So...
Thanks,
Tom