Message-ID: <YMtj4Xoo5lh4fGSx@hirez.programming.kicks-ass.net>
Date:   Thu, 17 Jun 2021 17:01:53 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     Mark Rutland <mark.rutland@....com>,
        "Russell King (Oracle)" <linux@...linux.org.uk>,
        the arch/x86 maintainers <x86@...nel.org>,
        Dave Hansen <dave.hansen@...el.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Nicholas Piggin <npiggin@...il.com>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 7/8] membarrier: Remove arm (32) support for SYNC_CORE

On Thu, Jun 17, 2021 at 07:00:26AM -0700, Andy Lutomirski wrote:
> On Thu, Jun 17, 2021, at 6:51 AM, Mark Rutland wrote:

> > It's not clear to me what "the right thing" would mean specifically, and
> > on architectures with userspace cache maintenance JITs can usually do
> > the most optimal maintenance, and only need help for the context
> > synchronization.
> > 
> 
> This I simply don't believe -- I doubt that any sane architecture
> really works like this.  I wrote an email about it to Intel that
> apparently generated internal discussion but no results.  Consider:
> 
> mmap(some shared library, some previously unmapped address);
> 
> this does no heavyweight synchronization, at least on x86.  There is
> no "serializing" instruction in the fast path, and it *works* despite
> anything the SDM may or may not say.

I'm confused; why do you think that is relevant?

The only way to get into an address space is a CR3 write, which is
serializing and will flush everything. Since there wasn't anything
mapped at that address, nothing could be 'cached' from that location.

So that has to work...
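
For illustration (a sketch of mine, not anything from this thread),
the user-space pattern that "has to work" looks like this on x86-64
Linux -- execute from a freshly mapped, previously unmapped page,
with no explicit serializing instruction anywhere in user space:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
        /* mov eax, 42 ; ret */
        static const unsigned char insn[] =
                { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;
        memcpy(p, insn, sizeof(insn));
        /* no SERIALIZE/CPUID/IRET between the write and the call */
        return ((int (*)(void))p)() == 42 ? 0 : 1;
}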

> We can and, IMO, should develop a sane way for user programs to
> install instructions into VMAs, for security-conscious software to
> verify them (by splitting the read and write sides?), and for their
> consumers to execute them, without knowing any arch details.  And I
> think this can be done with no IPIs except for possible TLB flushing
> when needed, at least on most architectures.  It would require a
> nontrivial amount of design work, and it would not resemble
> sys_cacheflush() or SYNC_CORE.
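
FWIW, a rough sketch of what I imagine that read/write split could
look like -- my guess using memfd_create(2), not necessarily what you
have in mind: the generator writes through one mapping, while the
consumers only ever see a separate PROT_EXEC view of the same pages:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/* returns the writable view, hands back the exec view via *exec */
void *jit_alloc(size_t size, void **exec)
{
        int fd = memfd_create("jit", MFD_CLOEXEC);
        void *w, *x;

        if (fd < 0)
                return NULL;
        if (ftruncate(fd, size) < 0) {
                close(fd);
                return NULL;
        }

        w = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        x = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
        close(fd);      /* the pages stay alive via the mappings */

        if (w == MAP_FAILED || x == MAP_FAILED)
                return NULL;
        *exec = x;
        return w;
}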

The interesting use-case is where we modify code that is under active
execution in a multi-threaded process; where CPU0 runs code and doesn't
make any syscalls at all, while CPU1 modifies code that is visible to
CPU0.

In that case CPU0 can have various internal state still reflecting the
old instructions that no longer exist in memory -- presumably.

We also need to inject at least a full memory barrier and pipeline
flush, to create a proper 'before' and 'after' the modification, so
that we can reason about when the *other* threads will be able to
observe the new code.
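
With the existing ABI that is exactly what membarrier(2) SYNC_CORE is
for; on the writer side it would look something like this (assuming
the process registered with
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE at startup):

#include <linux/membarrier.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* IPIs every CPU running a thread of this process and guarantees a
 * core-serializing event on each before returning */
static int sync_core_all_threads(void)
{
        return syscall(__NR_membarrier,
                       MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0, 0);
}

/* writer side (CPU1) */
void patch_code(void *dst, const void *src, size_t len)
{
        memcpy(dst, src, len);                   /* publish new code */
        __atomic_thread_fence(__ATOMIC_SEQ_CST); /* full barrier: "before" */
        sync_core_all_threads();                 /* pipeline flush: "after" */
}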

Now, the SDM documents that prefetch and trace buffers are not flushed
on i$ invalidate (actual implementations might of course differ), and
flushing them requires the SERIALIZE instruction or one of the many
instructions that imply it, one of which is IRET.
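
At the instruction level that is (my snippet; encoding per the SDM,
with CPUID being the traditional serializing fallback on older parts):

/* NP 0F 01 E8 = SERIALIZE; needs CPUID.07H:0.EDX[14] */
static inline void serialize(void)
{
        asm volatile(".byte 0x0f, 0x01, 0xe8" ::: "memory");
}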

For the cross-modifying case, I really don't see how you can avoid
sending an IPI and still expect behaviour one can reason about,
irrespective of any non-coherent behaviour.

Now, the SDM documents non-coherent behaviour and requires SERIALIZE,
while at the same time any IPI already implies IRET, which implies
SERIALIZE -- except some Luto guy has plans to optimize the IRET
paths, so we can't rely on that.
