Message-ID: <Zr6MjWLD5VQAQHAV@ghost>
Date: Thu, 15 Aug 2024 16:17:33 -0700
From: Charlie Jenkins <charlie@...osinc.com>
To: Andrea Parri <parri.andrea@...il.com>
Cc: Paul Walmsley <paul.walmsley@...ive.com>,
	Palmer Dabbelt <palmer@...belt.com>,
	Albert Ou <aou@...s.berkeley.edu>,
	Alexandre Ghiti <alexghiti@...osinc.com>,
	Atish Patra <atishp@...osinc.com>,
	Samuel Holland <samuel.holland@...ive.com>,
	Palmer Dabbelt <palmer@...osinc.com>,
	linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] riscv: Eagerly flush in flush_icache_deferred()

On Thu, Aug 15, 2024 at 02:34:19AM +0200, Andrea Parri wrote:
> > <---- Thread 1 starts running here on CPU1
> > 
> > <---- Thread 2 starts running here with same mm
> > 
> > T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON, PR_RISCV_SCOPE_PER_PROCESS);
> > T2: <-- kernel sets current->mm->context.force_icache_flush to true
> 
> Mmh, TBH, I'm not sure how this patch is supposed to fix the race in
> question:
> 
> For once, AFAIU, the operation
> 
> T2: cpumask_setall(&current->mm->context.icache_stale_mask)
> 
> (on CPU2?) you've added with this patch...
> 
> 
> > T2: <modification of instructions>
> > T2: fence.i
> > 
> > T1: fence.i (to synchronize with other thread, has some logic to
> >              determine when to do this)
> > T1: <-- thread 1 is preempted
> > T1: <-- thread 1 is placed onto CPU3 and starts context switch sequence
> > T1 (kernel): flush_icache_deferred() -> skips flush because switch_to_should_flush_icache() returns true
> 
> ... does _not_ ensure that T1: flush_icache_deferred() on CPU3 will
> observe/read from that operation: IOW,
> 
> T1: cpumask_test_and_clear_cpu(cpu, &mm->context.icache_stale_mask)
> 
> may still evaluate to FALSE, thus preventing the FENCE.I execution.
> 
> Moreover, AFAIU, ...
> 
> 
> > 				     -> thread has migrated and task->mm->context.force_icache_flush is true
> > 
> > T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_OFF, PR_RISCV_SCOPE_PER_PROCESS);
> 
> ... moving the operation(s)
> 
> T2: set_icache_stale_mask()
> 	T2: cpumask_setall(&current->mm->context.icache_stale_mask)
> 
> before the following operation...  (per patch #1)
> 
> 
> > T2 (kernel): kernel sets current->mm->context.force_icache_flush = false
> > 
> > T1 (kernel): switch_to() calls switch_to_should_flush_icache() which now
> > 	     returns false because task->mm->context.force_icache_flush
> > 	     is false due to the other thread emitting
> > 	     PR_RISCV_CTX_SW_FENCEI_OFF.
> 
> ... does not ensure that T1: switch_to_should_flush_icache() on CPU3
> will observe
> 
> T1: cpumask_test_cpu(<cpu3>, &task->mm->context.icache_stale_mask) == true
> 
> in fact, T1 may evaluate the latter expression to FALSE while still
> being able to observe the "later" operation, i.e.
> 
> T1: task->mm->context.force_icache_flush == false
> 
> 
> Perhaps a simplified but useful way to look at such scenarios is as
> follows:
> 
>   - CPUs are just like nodes of a distributed system, and
> 
>   - stores are like messages to be exchanged (by the memory subsystem)
>     between CPUs: without some (explicit) synchronization/constraints,
>     messages originating from a given CPU can propagate/be visible to
>     other CPUs at any time and in any order.
> 
> 
> IAC, can you elaborate on the solution proposed here (maybe by adding
> some inline comments), keeping the above considerations in mind? what
> am I missing?

I should have added some memory barriers. I want the stores to
task->mm->context.force_icache_flush and
task->mm->context.icache_stale_mask in riscv_set_icache_flush_ctx() on
one hart to be visible to another hart that observes those values in
flush_icache_deferred() and switch_to_should_flush_icache(). I also want
the changes made to those variables in flush_icache_deferred() and
switch_to_should_flush_icache() to be visible to future invocations of
those functions.
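
Concretely, the pairing I have in mind for the FENCEI_OFF path is
roughly the following (only a sketch to show the intent, not a tested
patch; the names follow the existing code, but the barrier choice and
placement still need review):

/* riscv_set_icache_flush_ctx(), PR_RISCV_CTX_SW_FENCEI_OFF (writer): */
cpumask_setall(&current->mm->context.icache_stale_mask);
/* Order the mask update before the flag is cleared. */
smp_store_release(&current->mm->context.force_icache_flush, false);

/* switch_to_should_flush_icache()/flush_icache_deferred() (reader): */
if (!smp_load_acquire(&task->mm->context.force_icache_flush))
	/*
	 * Pairs with the release above: if the cleared flag is visible
	 * here, the stale mask set by the prctl is visible too.
	 */
	return cpumask_test_and_clear_cpu(cpu,
					  &task->mm->context.icache_stale_mask);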

> 
> 
> > T1 (back in userspace): Instruction cache was never flushed on context
> > 			switch to CPU3, and thus may execute incorrect
> > 			instructions.
> 
> Mmh, flushing the I$ (or, as meant here, executing a FENCE.I) seems
> to be only half of the solution: IIUC, we'd like to ensure that the
> store operation
> 
> T2: <modification of instructions>
> 
> originated from CPU2 is _visible_ to CPU3 by the time that FENCE.I
> instruction is executed, cf.
> 
> [from Zifencei - emphasis mine]
> 
> A FENCE.I instruction ensures that a subsequent instruction fetch on
> a RISC-V hart will see any previous data stores _already visible_ to
> the same RISC-V hart.
>

Oh okay, so we will need a memory barrier before the fence.i in the
userspace program. I don't believe a memory barrier will be necessary in
the kernel while this prctl is active, though; will the kernel ensure
memory coherence upon migration?
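
For the userspace side, what I mean is something along these lines
(purely illustrative; "code", "ready" and write_instructions() are
made-up names):

/* T2 (producer), hart A: */
write_instructions(code);                      /* plain stores of new code      */
__atomic_store_n(&ready, 1, __ATOMIC_RELEASE); /* order code stores before flag */

/* T1 (consumer), hart B: */
while (!__atomic_load_n(&ready, __ATOMIC_ACQUIRE))
	;                                      /* code stores now visible here  */
__asm__ volatile("fence.i" ::: "memory");      /* ...and visible to ifetch      */
((void (*)(void))code)();                      /* run the generated code        */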

> 
> IOW (but assuming code is in coherent main memory), imagine that the
> (putative) FENCE.I on CPU3 is replaced by some
> 
> T1: LOAD reg,0(insts_addr)
> 
> question is: would such a load be guaranteed to observe the store
> 
> T2: <modification of instructions>  #  STORE new_insts,0(insts_addr)
> 
> originated from CPU2? can you elaborate?

To give a broader overview, the use case for the per-mm mode of the prctl
is to support JIT languages that generate code sequences on one hart and
execute the generated code on another hart. In this example, thread 2 is
generating the code sequences and thread 1 is executing them.

The goal here is for Linux to guarantee that CPU migration does not
cause thread 1 to stop seeing the instructions generated by thread 2, if
it was able to see the generated instructions on the CPU it was
migrating from. To uphold this guarantee, when thread 1 (or any other
thread in the mm group) is migrated, its instruction cache is
synchronized. Ideally the cache would contain exactly the same contents
as it did on the previous CPU, but since that is not possible the kernel
must instead rely on fence.i, which is the only available mechanism.

The stipulation "if it was able to see the generated instructions on
the CPU it was migrating from" is there to say that the thread is still
expected to emit fence.i as necessary to cover the case where migration
does not occur.
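
To make that concrete, a minimal (hypothetical, riscv64-Linux-only)
version of the producer/consumer pattern with the prctl active could
look like the program below; it JITs a single "ret" instruction and
assumes headers new enough to provide the prctl constants:

#include <linux/prctl.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/prctl.h>

static uint32_t *code;         /* shared JIT buffer            */
static atomic_int ready;       /* producer -> consumer handoff */

/* Thread 2: generates the code (here just a single "ret"). */
static void *producer(void *unused)
{
	code[0] = 0x00008067u;  /* jalr x0, 0(ra), i.e. "ret" */
	/* Release: order the code store before the flag store. */
	atomic_store_explicit(&ready, 1, memory_order_release);
	return NULL;
}

/* Thread 1: waits for the code, syncs its fetch stream, runs it. */
static void *consumer(void *unused)
{
	while (!atomic_load_explicit(&ready, memory_order_acquire))
		;
	/* Local fence.i covers the no-migration case; the prctl below
	 * makes the kernel cover migration. */
	__asm__ volatile("fence.i" ::: "memory");
	((void (*)(void))code)();
	puts("generated code executed");
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* Ask the kernel to fence.i on migration for every thread in this mm. */
	if (prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON,
		  PR_RISCV_SCOPE_PER_PROCESS))
		perror("prctl");

	code = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (code == MAP_FAILED)
		return 1;

	pthread_create(&t2, NULL, producer, NULL);
	pthread_create(&t1, NULL, consumer, NULL);
	pthread_join(t2, NULL);
	pthread_join(t1, NULL);
	return 0;
}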

Another note is that with this prctl, fence.i is emitted by the kernel
whenever any thread in the mm group is migrated; however, the described
use case has one producer of the instructions and one consumer. An
extension to this prctl could be to specify which threads are consumers
and which are producers, and only flush the icache when a consumer has
migrated, but that optimization has not yet been written.

- Charlie

> 
>   Andrea
