Message-ID: <20250611102010.1bf7c264@batman.local.home>
Date: Wed, 11 Jun 2025 10:20:10 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: "Masami Hiramatsu (Google)" <mhiramat@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>,
 Thomas Gleixner <tglx@...utronix.de>, Borislav Petkov <bp@...en8.de>, Dave
 Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, Naresh Kamboju
 <naresh.kamboju@...aro.org>, open list <linux-kernel@...r.kernel.org>,
 Linux trace kernel <linux-trace-kernel@...r.kernel.org>,
 lkft-triage@...ts.linaro.org, Stephen Rothwell <sfr@...b.auug.org.au>, Arnd
 Bergmann <arnd@...db.de>, Dan Carpenter <dan.carpenter@...aro.org>, Anders
 Roxell <anders.roxell@...aro.org>, Peter Zijlstra <peterz@...radead.org>,
 Ingo  Molnar <mingo@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
 Borislav  Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 x86@...nel.org
Subject: Re: [RFC PATCH 2/2] x86: alternative: Invalidate the cache for
 updated instructions


[ I just noticed that you continued the thread without the x86 folks Cc'd ]

On Wed, 11 Jun 2025 19:26:10 +0900
Masami Hiramatsu (Google) <mhiramat@...nel.org> wrote:

> On Tue, 10 Jun 2025 11:50:30 -0400
> Steven Rostedt <rostedt@...dmis.org> wrote:
> 
> > On Tue, 10 Jun 2025 23:47:48 +0900
> > "Masami Hiramatsu (Google)" <mhiramat@...nel.org> wrote:
> >   
> > > Maybe one possible scenario is hitting the int3 after the third step
> > > somehow (from a stale I-cache).
> > > 
> > > ------
> > > <CPU0>					<CPU1>
> > > 					Start smp_text_poke_batch_finish().
> > > 					Start the third step. (remove INT3)
> > > 					on_each_cpu(do_sync_core)
> > > do_sync_core(do SERIALIZE)
> > > 					Finish the third step.
> > > Hit INT3 (from I-cache)
> > > 					Clear text_poke_array_refs[cpu0]
> > > Start smp_text_poke_int3_handler()  
> > 
> > I believe your analysis describes the issue here. The commit that changed
> > the ref counter from global to per-CPU didn't cause the issue; it just
> > made the race window bigger.
> >   
> 
> Ah, OK. That makes it easier to explain. Since we use the
> trap gate for #BP, it does not clear IF automatically.
> Thus there is a time window between executing an INT3 from
> the icache (or one already in the pipeline) and the handler
> disabling interrupts. If the IPI is received in that time
> window, this bug happens.
> 
> <CPU0>					<CPU1>
> 					Start smp_text_poke_batch_finish().
> 					Start the third step. (remove INT3)
> Hit INT3 (from icache/pipeline)
> 					on_each_cpu(do_sync_core)
> ----
> do_sync_core(do SERIALIZE)
> ----
> 					Finish the third step.
> Handle #BP including CLI
> 					Clear text_poke_array_refs[cpu0]
> preparing stack
> Start smp_text_poke_int3_handler()
> Failed to get text_poke_array_refs[cpu0]
> 
> In this case, the per-cpu text_poke_array_refs makes the time
> window bigger because clearing text_poke_array_refs is faster.
> 
> If this is correct, flushing the cache does not matter (though it
> can make the window smaller).
> 
> One possible solution is to send the IPI again, which waits for the
> current #BP handler to exit. That can make the window small enough.
> 
> Another solution is removing the WARN_ONCE() from [1/2], which
> means we accept this scenario but avoid the catastrophic result.

If interrupts are enabled when the breakpoint is hit and the CPU has
just entered the int3 handler, does that also mean it can schedule?

If that's the case, then we either have to remove the WARN_ONCE() or
do something like a synchronize_rcu_tasks().
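
For illustration, a rough and completely untested sketch of how those
options could slot into the end of the third step. The names assume the
current renamed helpers (smp_text_poke_sync_each_cpu(), per-cpu atomic_t
text_poke_array_refs); the second sync call and the
synchronize_rcu_tasks() are hypothetical, not code from either patch:

	int cpu;

	/* Third step: first byte restored, usual sync IPI already sent. */
	smp_text_poke_sync_each_cpu();

	/*
	 * Hypothetical second round trip (Masami's option): a CPU that
	 * took a stale INT3 and already executed the CLI in the #BP
	 * handler will not ack this IPI until the handler returns, so
	 * this shrinks the window but does not provably close it.
	 */
	smp_text_poke_sync_each_cpu();

	/*
	 * Hypothetical heavier hammer: if the handler can schedule,
	 * wait until every task has passed a voluntary context switch,
	 * so nothing is still inside the handler path.
	 */
	synchronize_rcu_tasks();

	/* Only now drop the per-cpu references. */
	for_each_possible_cpu(cpu)
		atomic_set(per_cpu_ptr(&text_poke_array_refs, cpu), 0);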

-- Steve
