Message-Id: <2F989294-F0D4-4F1C-86A6-E657F60EF2A8@amacapital.net>
Date: Mon, 29 Mar 2021 16:26:41 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Marco Elver <elver@...gle.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
"Sarvela, Tomi P" <tomi.p.sarvela@...el.com>,
kasan-dev <kasan-dev@...glegroups.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
the arch/x86 maintainers <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: I915 CI-run with kfence enabled, issues found
> On Mar 29, 2021, at 2:55 PM, Marco Elver <elver@...gle.com> wrote:
>
> On Mon, 29 Mar 2021 at 23:47, Andy Lutomirski <luto@...capital.net> wrote:
>>
>>
>>>> On Mar 29, 2021, at 2:34 PM, Marco Elver <elver@...gle.com> wrote:
>>>
>>> On Mon, 29 Mar 2021 at 23:03, Dave Hansen <dave.hansen@...el.com> wrote:
>>>>> On 3/29/21 10:45 AM, Marco Elver wrote:
>>>>>> On Mon, 29 Mar 2021 at 19:32, Dave Hansen <dave.hansen@...el.com> wrote:
>>>>> Doing it to all CPUs is too expensive, and we can tolerate this being
>>>>> approximate (nothing bad will happen, KFENCE might just miss a bug and
>>>>> that's ok).
>>>> ...
>>>>>> BTW, the preempt checks in flush_tlb_one_kernel() are dependent on KPTI
>>>>>> being enabled. That's probably why you don't see this everywhere. We
>>>>>> should probably have unconditional preempt checks in there.
>>>>>
>>>>> In which case I'll add a preempt_disable/enable() pair to
>>>>> kfence_protect_page() in arch/x86/include/asm/kfence.h.
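
(For concreteness, that would presumably look something like the
following untested sketch, based on the current kfence_protect_page()
in arch/x86/include/asm/kfence.h:

	static inline bool kfence_protect_page(unsigned long addr, bool protect)
	{
		unsigned int level;
		pte_t *pte = lookup_address(addr, &level);

		if (WARN_ON(!pte || level != PG_LEVEL_4K))
			return false;

		if (protect)
			set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
		else
			set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

		/*
		 * The flush is best-effort; disable preemption so the
		 * KPTI-dependent preempt checks in flush_tlb_one_kernel()
		 * cannot fire.
		 */
		preempt_disable();
		flush_tlb_one_kernel(addr);
		preempt_enable();
		return true;
	}
)
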
>>>>
>>>> That sounds sane to me. I'd just plead that the special situation (not
>>>> needing deterministic TLB flushes) is obvious. We don't want any folks
>>>> copying this code.
>>>>
>>>> BTW, I know you want to avoid the cost of IPIs, but have you considered
>>>> any other low-cost ways to get quicker TLB flushes? For instance, you
>>>> could loop over all CPUs and set cpu_tlbstate.invalidate_other=1. That
>>>> would induce a TLB flush at the next context switch without needing
>>>> an IPI.
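
(Spelled out, that suggestion is roughly the following; untested, and
it assumes writing another CPU's tlb_state without synchronization is
acceptable, which matches the "approximate is fine" property above:

	int cpu;

	/*
	 * Mark every CPU's other ASIDs stale; each CPU then flushes
	 * lazily at its next context switch, with no IPIs sent.
	 */
	for_each_online_cpu(cpu)
		per_cpu(cpu_tlbstate.invalidate_other, cpu) = true;
)
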
>>>
>>> This is interesting. And it seems like it would work well for our
>>> use case. Ideally we should only flush entries related to the page we
>>> changed. But it seems invalidate_other would flush the entire TLB.
>>>
>>> With PTI, flush_tlb_one_kernel() already does that for the current
>>> CPU, but now we'd flush the entire TLB for all CPUs, even if PTI is
>>> off.
>>>
>>> Do you have an intuition for how much this would affect large
>>> multi-socket systems? I currently can't quite say, and would err on
>>> the side of caution.
>>
>> Flushing the kernel TLB for all addresses is rather pricey. ISTR 600
>> cycles on Skylake, not to mention the cost of losing the TLB. How
>> common is this?
>
> AFAIK, invalidate_other resets the ASID, so it's not an explicit
> flush and perhaps cheaper?
>
> In any case, if we were to do this, it'd be based on the sample
> interval of KFENCE, which can be as low as 1ms. But this is a
> production debugging feature, so the target machines are not test
> machines. For those production deployments we'd be looking at a flush
> every ~500ms. But I know of other deployments that use <100ms.
>
> Doesn't sound like much, but as you say, I also worry a bit about
> losing the TLB across >100 CPUs even if it's every 500ms.

On non-PTI, the only way to zap kernel mappings is to do a global
flush, either via INVPCID (expensive) or CR4 (extra expensive). In PTI
mode, it’s plausible that the implicit flush is good enough, and I’d be
happy to review the patch, but it’s a PTI-only thing. Much less
expensive in PTI mode, too, because it only needs to flush kernel
mappings.
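
(Roughly, per native_flush_tlb_global() in arch/x86/mm/tlb.c, the two
mechanisms are:

	if (static_cpu_has(X86_FEATURE_INVPCID)) {
		/* INVPCID type 2: flush all PCIDs, including globals. */
		invpcid_flush_all();
	} else {
		unsigned long cr4 = this_cpu_read(cpu_tlbstate.cr4);

		/* Toggle CR4.PGE: clearing it flushes everything. */
		native_write_cr4(cr4 ^ X86_CR4_PGE);
		native_write_cr4(cr4);
	}
)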

If this is best-effort, it might be better to have some work in the
exit-to-usermode path, or a thread or similar, that periodically does
targeted single-page zaps.
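
Something like this untested sketch, with all names invented for
illustration (the hook would sit with the other exit-to-usermode work):

	/* Bumped by KFENCE after it toggles a page's protection. */
	static atomic_t kfence_flush_gen;
	static unsigned long kfence_flush_addr;
	static DEFINE_PER_CPU(int, kfence_seen_gen);

	/*
	 * Called on the way back to usermode: each CPU zaps the stale
	 * translation locally, so no IPIs are ever sent.
	 */
	static inline void kfence_flush_stale_tlb(void)
	{
		int gen = atomic_read(&kfence_flush_gen);

		if (unlikely(gen != this_cpu_read(kfence_seen_gen))) {
			flush_tlb_one_kernel(READ_ONCE(kfence_flush_addr));
			this_cpu_write(kfence_seen_gen, gen);
		}
	}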