Date:   Sun, 19 Apr 2020 05:02:01 +0000
From:   Alex Belits <abelits@...vell.com>
To:     "mark.rutland@....com" <mark.rutland@....com>
CC:     "mingo@...nel.org" <mingo@...nel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        "rostedt@...dmis.org" <rostedt@...dmis.org>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
        Prasun Kapoor <pkapoor@...vell.com>,
        "catalin.marinas@....com" <catalin.marinas@....com>,
        "frederic@...nel.org" <frederic@...nel.org>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "will@...nel.org" <will@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [EXT] Re: [PATCH v3 03/13] task_isolation: add instruction
 synchronization memory barrier


On Wed, 2020-04-15 at 13:44 +0100, Mark Rutland wrote:
> On Thu, Apr 09, 2020 at 03:17:40PM +0000, Alex Belits wrote:
> > Some architectures implement memory synchronization instructions for
> > instruction cache. Make a separate kind of barrier that calls them.
> 
> Modifying the instruction caches requires more than an ISB, and the
> 'IMB' naming implies you're trying to order against memory accesses,
> which isn't what ISB (generally) does.
> 
> What exactly do you want to use this for?

I guess there should be a different explanation and naming.

The intention is to have a separate barrier that causes an instruction
cache synchronization event, for use in architecture-independent code.
I am not sure what exactly it should do to be implementable in an
architecture-independent manner, so it probably only makes sense along
with a regular memory barrier.
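
E.g. a caller in architecture-independent code would probably pair the
two roughly like this (purely illustrative):

	/* make remotely modified data (including new code bytes) visible */
	smp_mb();
	/* make sure this CPU does not keep executing stale instructions */
	imb();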

The particular place where I had to use it is the code that has to run
after an isolated task returns to the kernel. In the model that I
propose for task isolation, remote context synchronization is skipped
while the task is isolated in userspace (it doesn't run kernel code,
and the kernel does not modify its userspace code, so skipping is
harmless until it enters the kernel). So such a CPU will miss the
effect of kick_all_cpus_sync() calls made from flush_icache_range()
and other similar places. This means that once the task is out of
userspace, it should only run some "safe" kernel entry code, and then
synchronize in a manner that avoids races with any context
synchronization IPIs that may arrive at the same time. My next patch
in the series uses it in that one place.
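
Roughly what I have in mind there (the names below are made up for
this example; the actual code is in the next patch of the series):

#include <linux/atomic.h>
#include <linux/percpu.h>

/* hypothetical per-CPU flag; the real code keeps different bookkeeping */
static DEFINE_PER_CPU(atomic_t, ll_isolated);

static void task_isolation_kernel_enter(void)
{
	/*
	 * While the flag was set, kick_all_cpus_sync() skipped this CPU.
	 * Clear the flag first, so that a concurrent remote sync either
	 * sees it cleared and sends the IPI, or skipped us earlier and
	 * is covered by the barriers below.
	 */
	if (atomic_xchg(this_cpu_ptr(&ll_isolated), 0) == 0)
		return;	/* was not isolated, nothing was skipped */

	smp_mb();	/* pick up remote data/code modifications */
	imb();		/* drop already-fetched, possibly stale instructions */
}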

Synchronization will have to be implemented without a mandatory
interrupt because it may be triggered locally, on the same CPU. On ARM
an ISB is definitely necessary there; however, I am not sure how this
should look on x86 and other architectures. On ARM it should probably
still be combined with a real memory barrier and instruction cache
maintenance, although I am not entirely sure about the details. Would
it make more sense to run DMB, IC and ISB?
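
I.e., something like this on ARM64 (untested, only to illustrate the
question; I added a DSB to wait for the IC to complete, and I am not
sure whether the IC is needed at that point at all):

static inline void isolation_imb_arm64(void)
{
	asm volatile(
		"dmb	ish\n\t"	/* order prior memory accesses */
		"ic	ialluis\n\t"	/* invalidate all I-caches, inner shareable */
		"dsb	ish\n\t"	/* wait for the invalidation to complete */
		"isb"			/* context synchronization */
		: : : "memory");
}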

> 
> As-is, I don't think this makes sense as a generic barrier.
> 
> Thanks,
> Mark.

Signed-off-by: Alex Belits <abelits@...vell.com>
---
 arch/arm/include/asm/barrier.h   | 2 ++
 arch/arm64/include/asm/barrier.h | 2 ++
 include/asm-generic/barrier.h    | 4 ++++
 3 files changed, 8 insertions(+)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 83ae97c049d9..6def62c95937 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -64,12 +64,14 @@ extern void arm_heavy_mb(void);
 #define mb()		__arm_heavy_mb()
 #define rmb()		dsb()
 #define wmb()		__arm_heavy_mb(st)
+#define imb()		isb()
 #define dma_rmb()	dmb(osh)
 #define dma_wmb()	dmb(oshst)
 #else
 #define mb()		barrier()
 #define rmb()		barrier()
 #define wmb()		barrier()
+#define imb()		barrier()
 #define dma_rmb()	barrier()
 #define dma_wmb()	barrier()
 #endif
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 7d9cc5ec4971..12a7dbd68bed 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -45,6 +45,8 @@
 #define rmb()		dsb(ld)
 #define wmb()		dsb(st)
 
+#define imb()		isb()
+
 #define dma_rmb()	dmb(oshld)
 #define dma_wmb()	dmb(oshst)
 
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 85b28eb80b11..d5a822fb3e92 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -46,6 +46,10 @@
 #define dma_wmb()	wmb()
 #endif
 
+#ifndef imb
+#define imb()		barrier()
+#endif
+
 #ifndef read_barrier_depends
 #define read_barrier_depends()		do { } while (0)
 #endif
-- 
2.20.1



