Date:	Thu, 7 Apr 2016 17:53:12 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Andy Lutomirski <luto@...capital.net>
Cc:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...hat.com>,
	Paul Turner <commonly@...il.com>,
	Andi Kleen <andi@...stfloor.org>, Chris Lameter <cl@...ux.com>,
	Dave Watson <davejwatson@...com>,
	Josh Triplett <josh@...htriplett.org>,
	Linux API <linux-api@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Andrew Hunter <ahh@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC PATCH 0/3] restartable sequences v2: fast user-space percpu
 critical sections

On Thu, Apr 07, 2016 at 08:44:38AM -0700, Andy Lutomirski wrote:
> On Thu, Apr 7, 2016 at 8:24 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Thu, Apr 07, 2016 at 07:35:26AM -0700, Andy Lutomirski wrote:
> >> What I meant was: rather than shoving individual values into the TLABI
> >> thing, shove in a pointer:
> >>
> >> struct commit_info {
> >>   u64 post_commit_rip;
> >>   u32 cpu;
> >>   u64 *event;
> >>   // whatever else;
> >> };
> >>
> >> and then put a commit_info* in TLABI.
> >>
> >> This would save some bytes in the TLABI structure.
> >
> > But would cost us extra indirections. The whole point was getting this
> > stuff at a constant offset from the TLS segment register.
> 
> I don't think the extra indirections would matter much.  The kernel
> would have to chase the pointer, but only in the very rare case where
> it resumes userspace during a commit or on the immediately following
> instruction.

It's about userspace finding these values, not the kernel.

> At the very least, post_commit_rip and the abort address (which I
> forgot about) could both live in a static structure,

Paul keeps the abort address in rcx.

> and shoving a
> pointer to *that* into TLABI space is one store instead of two.

> > Ah, so what happens if the signal happens before the commit but after
> > the load of the seqcount?
> >
> > Then, even if the signal modifies the count, we'll not observe.
> >
> 
> Where exactly?
> 
> In my scheme, nothing except the kernel ever loads the seqcount.  The
> user code generates a fresh value, writes it to memory, and then, just
> before commit, writes that same value to the TLABI area and then
> double-checks that the value it wrote at the beginning is still there.
> 
> If the signal modifies the count, then the user code won't directly
> notice, but prepare_exit_to_usermode on the way out of the signal will
> notice that the (restored) TLABI state doesn't match the counter that
> the signal handler changed and will jump to the abort address.


OK, you lost me..  commit looks like:

+       __asm__ __volatile__ goto (
+                       "movq $%l[failed], %%rcx\n"
+                       "movq $1f, %[commit_instr]\n"
+                       "cmpq %[start_value], %[current_value]\n"

If we get preempted/signaled here without the preemption/signal entry
checking for the post_commit_instr, we'll fail hard.

+                       "jnz %l[failed]\n"
+                       "movq %[to_write], (%[target])\n"
+                       "1: movq $0, %[commit_instr]\n"
+         : /* no outputs */
+         : [start_value]"d"(start_value.storage),
+           [current_value]"m"(__rseq_state),
+           [to_write]"r"(to_write),
+           [target]"r"(p),
+           [commit_instr]"m"(__rseq_state.post_commit_instr)
+         : "rcx", "memory"
+         : failed
+       );
