Message-Id: <91702ceb-afba-450e-819b-52d482d7bd11@app.fastmail.com>
Date: Wed, 19 Nov 2025 09:31:37 -0800
From: "Andy Lutomirski" <luto@...nel.org>
To: "Valentin Schneider" <vschneid@...hat.com>,
 "Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
 linux-mm@...ck.org, rcu@...r.kernel.org,
 "the arch/x86 maintainers" <x86@...nel.org>,
 linux-arm-kernel@...ts.infradead.org, loongarch@...ts.linux.dev,
 linux-riscv@...ts.infradead.org, linux-arch@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org
Cc: "Thomas Gleixner" <tglx@...utronix.de>, "Ingo Molnar" <mingo@...hat.com>,
 "Borislav Petkov" <bp@...en8.de>,
 "Dave Hansen" <dave.hansen@...ux.intel.com>,
 "H. Peter Anvin" <hpa@...or.com>,
 "Peter Zijlstra (Intel)" <peterz@...radead.org>,
 "Arnaldo Carvalho de Melo" <acme@...nel.org>,
 "Josh Poimboeuf" <jpoimboe@...nel.org>,
 "Paolo Bonzini" <pbonzini@...hat.com>, "Arnd Bergmann" <arnd@...db.de>,
 "Frederic Weisbecker" <frederic@...nel.org>,
 "Paul E. McKenney" <paulmck@...nel.org>,
 "Jason Baron" <jbaron@...mai.com>,
 "Steven Rostedt" <rostedt@...dmis.org>,
 "Ard Biesheuvel" <ardb@...nel.org>,
 "Sami Tolvanen" <samitolvanen@...gle.com>,
 "David S. Miller" <davem@...emloft.net>,
 "Neeraj Upadhyay" <neeraj.upadhyay@...nel.org>,
 "Joel Fernandes" <joelagnelf@...dia.com>,
 "Josh Triplett" <josh@...htriplett.org>,
 "Boqun Feng" <boqun.feng@...il.com>,
 "Uladzislau Rezki" <urezki@...il.com>,
 "Mathieu Desnoyers" <mathieu.desnoyers@...icios.com>,
 "Mel Gorman" <mgorman@...e.de>,
 "Andrew Morton" <akpm@...ux-foundation.org>,
 "Masahiro Yamada" <masahiroy@...nel.org>,
 "Han Shen" <shenhan@...gle.com>, "Rik van Riel" <riel@...riel.com>,
 "Jann Horn" <jannh@...gle.com>,
 "Dan Carpenter" <dan.carpenter@...aro.org>,
 "Oleg Nesterov" <oleg@...hat.com>, "Juri Lelli" <juri.lelli@...hat.com>,
 "Clark Williams" <williams@...hat.com>,
 "Yair Podemsky" <ypodemsk@...hat.com>,
 "Marcelo Tosatti" <mtosatti@...hat.com>,
 "Daniel Wagner" <dwagner@...e.de>, "Petr Tesarik" <ptesarik@...e.com>,
 "Shrikanth Hegde" <sshegde@...ux.ibm.com>
Subject: Re: [RFC PATCH v7 29/31] x86/mm/pti: Implement a TLB flush immediately after a
 switch to kernel CR3


On Wed, Nov 19, 2025, at 7:44 AM, Valentin Schneider wrote:
> On 19/11/25 06:31, Andy Lutomirski wrote:
>> On Fri, Nov 14, 2025, at 7:14 AM, Valentin Schneider wrote:
>>> Deferring kernel range TLB flushes requires the guarantee that upon
>>> entering the kernel, no stale entry may be accessed. The simplest way to
>>> provide such a guarantee is to issue an unconditional flush upon switching
>>> to the kernel CR3, as this is the pivoting point where such stale entries
>>> may be accessed.
>>>
>>
>> Doing this together with the PTI CR3 switch has no actual benefit: MOV CR3 doesn’t flush global pages. And doing this in asm is pretty gross.  We don’t even get a free sync_core() out of it because INVPCID is not documented as being serializing.
>>
>> Why can’t we do it in C?  What’s the actual risk?  In order to trip over a stale TLB entry, we would need to dereference a pointer to newly allocated kernel virtual memory that was not valid prior to our entry into user mode. I can imagine BPF doing this, but plain noinstr C in the entry path?  Especially noinstr C *that has RCU disabled*?  We already can’t follow an RCU pointer, and ISTM the only style of kernel code that might do this would use RCU to protect the pointer, and we are already doomed if we follow an RCU pointer to any sort of memory.
>>
>
> So v4 and earlier had the TLB flush faff done in C in the context_tracking entry
> just like sync_core().
>
> My biggest issue with it was that I couldn't figure out a way to instrument
> memory accesses such that I would get an idea of where vmalloc'd accesses
> happen - even with a hackish thing just to survey the landscape. So while I
> agree with your reasoning wrt entry noinstr code, I don't have any way to
> prove it.
> That's unlike the text_poke sync_core() deferral for which I have all of
> that nice objtool instrumentation.
>
> Dave also pointed out that the whole stale entry flush deferral is a risky
> move, and that the sanest thing would be to execute the deferred flush just
> after switching to the kernel CR3.
>
> See the thread surrounding:
>   https://lore.kernel.org/lkml/20250114175143.81438-30-vschneid@redhat.com/
>
> mainly Dave's reply and subthread:
>   https://lore.kernel.org/lkml/352317e3-c7dc-43b4-b4cb-9644489318d0@intel.com/
>
>> We do need to watch out for NMI/MCE hitting before we flush.

I read a decent fraction of that thread.

Let's consider what we're worried about:

1. Architectural access to a kernel virtual address that has been unmapped, in asm or early C.  If it hasn't been remapped, then we oops anyway.  If it has, then that means we're accessing a pointer where either the pointer has changed or the pointee has been remapped while we're in user mode, and that's a very strange thing to do for anything that the asm points to or that early C points to, unless RCU is involved.  But RCU is already disallowed in the entry paths that might be in extended quiescent states, so I think this is mostly a nonissue.

2. Non-speculative hardware access via the GDT, etc.  We can't control this at all, but we're not about to move the GDT, IDT, LDT etc. of a running task while that task is in user mode.  We do move the LDT, but that's quite thoroughly synchronized via IPI.  (Should probably be double checked.  I wrote that code, but that doesn't mean I remember it exactly.)

3. Speculative TLB fills.  We can't control this at all.  We have had actual machine checks, on AMD IIRC, due to messing this up.  This is why we can't defer a flush after freeing a page table.

4. Speculative or other nonarchitectural loads.  One would hope that these are not dangerous.  For example, an early version of TDX would machine check if we did a speculative load from TDX memory, but that was fixed.  I don't see why this would be materially different between actual userspace execution (without LASS, anyway), kernel asm, and kernel C.

5. Writes to page table dirty bits.  I don't think we use these.

In any case, the current implementation in your series is really, really, utterly horrifically slow.  It's probably fine for a task that genuinely sits in usermode forever, but I don't think it's likely to be something that we'd be willing to enable for normal kernels and normal tasks.  And it would be really nice for the don't-interrupt-user-code stuff to move toward being always available rather than further from it.


I admit that I'm kind of with dhansen: Zen 3+ can use INVLPGB and doesn't need any of this.  Some Intel CPUs support RAR and will eventually be able to use RAR, possibly even for sync_core().
