Message-ID: <2cf3ae4b-b623-1489-0700-20ac3c24d5a1@amd.com>
Date: Fri, 1 May 2020 09:16:35 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: "Singh, Balbir" <sblbir@...zon.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: "keescook@...omium.org" <keescook@...omium.org>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"benh@...nel.crashing.org" <benh@...nel.crashing.org>,
"jpoimboe@...hat.com" <jpoimboe@...hat.com>,
"x86@...nel.org" <x86@...nel.org>,
"dave.hansen@...el.com" <dave.hansen@...el.com>
Subject: Re: [PATCH v4 1/6] arch/x86/kvm: Refactor l1d flush lifecycle
management
On 4/30/20 10:48 PM, Singh, Balbir wrote:
> On Sat, 2020-04-25 at 11:49 +1000, Balbir Singh wrote:
>> On Fri, 2020-04-24 at 13:59 -0500, Tom Lendacky wrote:
>>>
>>> On 4/23/20 9:01 AM, Balbir Singh wrote:
>>>> Split out the allocation and free routines to be used in a follow
>>>> up set of patches (to reuse for L1D flushing).
>>>>
>>>> Signed-off-by: Balbir Singh <sblbir@...zon.com>
>>>> Reviewed-by: Kees Cook <keescook@...omium.org>
>>>> ---
>>>> arch/x86/include/asm/cacheflush.h | 3 +++
>>>> arch/x86/kernel/Makefile | 1 +
>>>> arch/x86/kernel/l1d_flush.c | 36 +++++++++++++++++++++++++++++++
>>>> arch/x86/kvm/vmx/vmx.c | 25 +++------------------
>>>> 4 files changed, 43 insertions(+), 22 deletions(-)
>>>> create mode 100644 arch/x86/kernel/l1d_flush.c
>>>>
>>>> diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
>>>> index 63feaf2a5f93..bac56fcd9790 100644
>>>> --- a/arch/x86/include/asm/cacheflush.h
>>>> +++ b/arch/x86/include/asm/cacheflush.h
>>>> @@ -6,6 +6,9 @@
>>>>  #include <asm-generic/cacheflush.h>
>>>>  #include <asm/special_insns.h>
>>>>
>>>> +#define L1D_CACHE_ORDER 4
>>>
>>> Since this is becoming a generic function now, shouldn't this value be
>>> based on the actual L1D cache size? Is this value based on a 32KB data
>>> cache, the idea being to write twice the size of the cache to be sure
>>> that every entry has been replaced, with the second 32KB catching the
>>> odd line that might not have been pulled in?
>>>
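For reference, the arithmetic behind the current constant, assuming 4K
pages: order 4 means 2^4 = 16 pages, i.e. 64K written, which is twice a
32K L1D. A quick standalone sketch of that arithmetic (illustrative
only, not code from the series):

#include <stdio.h>

int main(void)
{
	const unsigned int order = 4;		/* L1D_CACHE_ORDER */
	const unsigned long page_size = 4096;	/* assumed 4K pages */
	const unsigned long flush_size = (1UL << order) * page_size;

	/* 16 pages * 4K = 64K, i.e. twice a 32K data cache */
	printf("flush buffer: %lu KB\n", flush_size / 1024);
	return 0;
}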
>>
>> Currently the only users are the VMX L1TF mitigation and the optional
>> prctl(). Ideally this should be based on the actual L1D cache size; I
>> checked a bit, and the largest L1D cache size across current x86 parts
>> is 64K. So there are three options here:
>>
>> 1. We refactor the code: save the L1D cache size and use cpu_dev
>> callbacks for the L1D flush (see the sketch below).
>> 2. We make the current code depend on the L1D_FLUSH MSR and enable it
>> only when that feature is available. There would be no software
>> fallback. Then follow it up with #1.
>> 3. We keep the code as is, on the assumption that all L1D caches are
>> <= 64K on current platforms, and do #1 in a follow-up (since the prctl
>> is optional and the only other user is the VMX code).
>>
>> Thanks for the review,
>> Balbir Singh.
>>
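FWIW, on option 1: a rough userspace sketch of how the actual L1D size
could be discovered via CPUID leaf 4 (Intel's deterministic cache
parameters). Purely illustrative, not from the series; in the kernel the
same data is already parsed by the cacheinfo code, so a real
implementation would hook into that rather than re-issue CPUID:

#include <stdio.h>
#include <cpuid.h>

/* Returns the L1 data cache size in bytes, or 0 if not found. */
static unsigned long l1d_size_bytes(void)
{
	unsigned int eax, ebx, ecx, edx, i;

	for (i = 0; i < 32; i++) {
		if (!__get_cpuid_count(4, i, &eax, &ebx, &ecx, &edx))
			return 0;
		if ((eax & 0x1f) == 0)		/* no more cache levels */
			return 0;
		/* type 1 == data cache, level in bits 7:5 */
		if ((eax & 0x1f) == 1 && ((eax >> 5) & 0x7) == 1) {
			unsigned long ways  = ((ebx >> 22) & 0x3ff) + 1;
			unsigned long parts = ((ebx >> 12) & 0x3ff) + 1;
			unsigned long line  = (ebx & 0xfff) + 1;
			unsigned long sets  = (unsigned long)ecx + 1;

			return ways * parts * line * sets;
		}
	}
	return 0;
}

int main(void)
{
	unsigned long l1d = l1d_size_bytes();

	if (!l1d) {
		fprintf(stderr, "no L1D info in CPUID leaf 4\n");
		return 1;
	}
	/* keep the current 2x-cache-size scheme, just sized dynamically */
	printf("L1D: %lu KB, flush buffer: %lu KB\n",
	       l1d / 1024, 2 * l1d / 1024);
	return 0;
}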
>
> Tom,
>
> I have the following change that I think should suffice for now (it's
> not properly formatted, but you get the idea):
>
> diff --git a/arch/x86/kernel/l1d_flush.c b/arch/x86/kernel/l1d_flush.c
> index a754b6c288a9..7fec0cc8f85c 100644
> --- a/arch/x86/kernel/l1d_flush.c
> +++ b/arch/x86/kernel/l1d_flush.c
> @@ -92,6 +92,9 @@ int l1d_flush_init_once(void)
>  {
>  	int ret = 0;
>
> +	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
> +		return -ENOTSUPP;
> +
>  	if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
>  		return ret;
>
>
> Does that satisfy your comments about patches 1/6 and 2/6 in the series?
Yes, that works.
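Just to confirm my reading: with that check in place, the prctl path can
simply propagate the error. Roughly (my sketch, not code from the
series; enable_l1d_flush_for_task() is assumed from the later patches):

	/* hypothetical caller: bail out cleanly on non-Intel parts */
	ret = l1d_flush_init_once();
	if (ret)
		return ret;	/* -ENOTSUPP on non-Intel CPUs */

	return enable_l1d_flush_for_task(current);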
Thanks,
Tom
>
> Balbir Singh.
>