Message-ID: <Zeor4DIGj0u6LNIw@google.com>
Date: Thu, 7 Mar 2024 21:04:32 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>, "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, x86@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/3] x86/mm: make sure LAM is up-to-date during context switching
On Thu, Mar 07, 2024 at 07:29:34AM -0800, Dave Hansen wrote:
> On 3/7/24 05:39, Yosry Ahmed wrote:
> > - /*
> > - * Read the tlb_gen to check whether a flush is needed.
> > - * If the TLB is up to date, just use it.
> > - * The barrier synchronizes with the tlb_gen increment in
> > - * the TLB shootdown code.
> > - */
> > - smp_mb();
> > - next_tlb_gen = atomic64_read(&next->context.tlb_gen);
> > - if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
> > - next_tlb_gen)
> > + if (!need_flush && !need_lam_update)
> > return;
>
> Instead of all this new complexity, why not just inc_mm_tlb_gen() at the
> site where LAM is enabled? That should signal to any CPU thread that
> its TLB is out of date and it needs to do a full CR3 reload.
It's not really a lot of complexity imo; most of the patch is reworking
the if conditions that return early when a TLB flush is not needed into
a single if block that sets need_flush to true. I can split this into a
separate patch if it helps review, or I can do something like this to
keep the diff concise:
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 2975d3f89a5de..17ab105522287 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -576,7 +576,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
* process. No TLB flush required.
*/
if (!was_lazy)
- return;
+ goto check_return;
/*
* Read the tlb_gen to check whether a flush is needed.
@@ -588,7 +588,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
next_tlb_gen = atomic64_read(&next->context.tlb_gen);
if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
next_tlb_gen)
- return;
+ goto check_return;
/*
* TLB contents went out of date while we were in lazy
@@ -596,6 +596,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
*/
new_asid = prev_asid;
need_flush = true;
+check_return:
+ if (!need_flush && /* LAM up-to-date */)
+ return;
} else {
/*
* Apply process to process speculation vulnerability
But I prefer the current patch, to avoid the goto; I think the flow is
easier to follow.
I thought about doing inc_mm_tlb_gen() when LAM is updated, but it felt
hacky and, more importantly, it doesn't make it clear in
switch_mm_irqs_off() that we correctly handle LAM updates. We can
certainly add a comment, but I think an explicit check for CPU LAM vs.
mm LAM is much clearer.
WDYT?
>
> Also, have you been able to actually catch this scenario in practice?
Nope, by code inspection (as I admitted in the commit log).
> Considering how fun this code path is, a little effort at an actual
> reproduction would be really appreciated.
I tried reproducing it but gave up quickly. We need a certain sequence
of events to happen:
CPU 1                                 CPU 2
kthread_use_mm()
                                      /* user thread enables LAM */
context_switch()
context_switch() /* to user thread */
IIUC we don't really need kthread_use_mm(); we just need the kthread to
be scheduled on the CPU directly after the user thread, so maybe
something like:
CPU 1                                 CPU 2
/* user thread running */
context_switch() /* to kthread */
                                      /* user thread enables LAM */
context_switch()
context_switch() /* to user thread */
It doesn't seem easy to reproduce. WDYT?