Message-ID: <034466a4-0917-47c4-934b-9549c3076624@intel.com>
Date: Thu, 7 Mar 2024 09:56:07 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>, x86@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/3] x86/mm: make sure LAM is up-to-date during
context switching
On 3/7/24 09:29, Kirill A. Shutemov wrote:
> On Thu, Mar 07, 2024 at 01:39:15PM +0000, Yosry Ahmed wrote:
>> During context switching, if we are not switching to a new mm and no TLB
>> flush is needed, we do not write CR3. However, it is possible that a
>> user thread enables LAM while a kthread is running on a different CPU
>> with the old LAM CR3 mask. If the kthread context switches into any
>> thread of that user process, it may not write CR3 with the new LAM mask,
>> which would cause the user thread to run with a misconfigured CR3 that
>> disables LAM on the CPU.
> I don't think it is possible. As I said, we can only enable LAM when the
> process has a single thread. If it enables LAM concurrently with a kernel
> thread, and the kernel thread gets control on the same CPU after the
> userspace thread of the same process, LAM is already going to be enabled.
> No need for special handling.
I think it's something logically like this:
	// kthread				// main thread
	kthread_use_mm()
	cr3 |= mm->lam_cr3_mask;
						mm->lam_cr3_mask = foo;
	cpu_tlbstate.lam = mm->lam_cr3_mask;
Obviously the kthread's LAM state is going to be random. It's
fundamentally racing with the enabling thread. That part is fine.
The main pickle is the fact that CR3 and cpu_tlbstate.lam are out of
sync. That seems worth fixing.
Or is there something else that keeps this whole thing from racing in
the first place?