Date:	Mon, 26 Jan 2015 12:59:29 -0700
From:	Ross Zwisler <ross.zwisler@...ux.intel.com>
To:	Borislav Petkov <bp@...en8.de>
Cc:	"H. Peter Anvin" <h.peter.anvin@...el.com>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 0/2] add support for new persistent memory
 instructions

On Sat, 2015-01-24 at 12:14 +0100, Borislav Petkov wrote:
> On Fri, Jan 23, 2015 at 03:03:41PM -0800, H. Peter Anvin wrote:
> > For the specific case of CLWB, we can use an "m" input rather than a
> > "+m" output, simply because CLWB (or CLFLUSH* used as a standin for CLWB
> > doesn't need to be ordered with respect to loads (whereas CLFLUSH* do).
> 
> Well, we could do something like:
> 
>         volatile struct { char x[64]; } *p = __p;
> 
>         if (static_cpu_has(X86_FEATURE_CLWB))
>                 asm volatile(".byte 0x66,0x0f,0xae,0x30" :: "m" (*p), "a" (p));
>         else
>                 asm volatile(ALTERNATIVE(
>                         ".byte 0x3e; clflush (%[pax])",
>                         ".byte 0x66; clflush (%[pax])", /* clflushopt (%%rax) */
>                         X86_FEATURE_CLFLUSHOPT)
>                         : [p] "+m" (*p)
>                         : [pax] "a" (p));
> 
> which would simplify the alternative macro too.

This is interesting!  I guess I'm confused as to how this solves the ordering
issue, though.  The "m" input vs "+m" output parameter will tell gcc whether
or not the assembly can be reordered at compile time with respect to reads at
that same location, correct?

So if we have an inline function that could either read or write from gcc's
point of view (input vs output parameter, depending on the branch), it seems
like it would be forced to fall back to the most restrictive case (assume it
will write), and not reorder with respect to reads.  If so, you'd end up in
the same place as using "+m" output, only now you've got an additional branch
instead of a 3-way alternative.

Am I misunderstanding this?
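
Just to spell out how I'm reading the two constraint forms (untested sketch,
function names purely for illustration):

	static inline void clwb_m_input(volatile void *__p)
	{
		volatile struct { char x[64]; } *p = __p;

		/*
		 * "m" input: gcc knows the asm reads *p, so stores to *p
		 * stay on their side of the asm, but loads of *p are free
		 * to move across it at compile time.
		 */
		asm volatile(".byte 0x66, 0x0f, 0xae, 0x30"	/* clwb (%rax) */
			     :: "m" (*p), "a" (p));
	}

	static inline void clflush_m_output(volatile void *__p)
	{
		volatile struct { char x[64]; } *p = __p;

		/*
		 * "+m" output: gcc treats *p as both read and written, so
		 * neither loads nor stores of *p can be reordered across
		 * the asm at compile time.
		 */
		asm volatile("clflush (%[pax])"
			     : [p] "+m" (*p)
			     : [pax] "a" (p));
	}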

> Generated asm looks ok to me (my objdump doesn't know CLWB yet :)):
> 
> 0000000000000aa0 <myclflush>:
>  aa0:   55                      push   %rbp
>  aa1:   48 89 e5                mov    %rsp,%rbp
>  aa4:   eb 0a                   jmp    ab0 <myclflush+0x10>
>  aa6:   48 89 f8                mov    %rdi,%rax
>  aa9:   66 0f ae 30             data16 xsaveopt (%rax)
>  aad:   5d                      pop    %rbp
>  aae:   c3                      retq
>  aaf:   90                      nop
>  ab0:   48 89 f8                mov    %rdi,%rax
>  ab3:   3e 0f ae 38             clflush %ds:(%rax)
>  ab7:   5d                      pop    %rbp
>  ab8:   c3                      retq
> 
> > Should we use an SFENCE as a standin if pcommit is unavailable, in case
> > we end up using CLFLUSHOPT?
> 
> Btw, is PCOMMIT a lightweight SFENCE for this persistent memory aspect
> to make sure stuff has become persistent after executing it? But not all
> stuff like SFENCE so SFENCE is the bigger hammer?

Ah, yep, I definitely need to include an example flow in my commit comments.
:) Here's a snip from my reply to hpa, to save searching:

	Both the flushes (wmb/clflushopt/clflush) and the pcommit are ordered
	by either mfence or sfence.

	An example function that flushes and commits a buffer could look like
	this (based on clflush_cache_range):

	void flush_and_commit_buffer(void *vaddr, unsigned int size)
	{       
		void *vend = vaddr + size - 1;
		
		for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
			clwb(vaddr);
		
		/* Flush any possible final partial cacheline */
		clwb(vend);
		
		/* 
		 * sfence to order clwb/clflushopt/clflush cache flushes
		 * mfence via mb() also works 
		 */
		wmb();

		pcommit();

		/* 
		 * sfence to order pcommit
		 * mfence via mb() also works 
		 */
		wmb();
	}
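
And on the SFENCE-standin question: if we go that way, pcommit() itself could
hide the fallback behind the alternatives machinery.  Rough, untested sketch
(PCOMMIT is 66 0f ae f8; the feature bit is the one this series would add):

	static inline void pcommit(void)
	{
		/*
		 * No assembler support for PCOMMIT yet, so emit the bytes
		 * directly.  CPUs without X86_FEATURE_PCOMMIT keep the plain
		 * sfence standin; CPUs with it get the PCOMMIT bytes patched
		 * in, so the sequence above still orders the
		 * clwb/clflushopt/clflush flushes either way.
		 */
		alternative("sfence",
			    ".byte 0x66, 0x0f, 0xae, 0xf8",	/* pcommit */
			    X86_FEATURE_PCOMMIT);
	}

With that, flush_and_commit_buffer() above would degenerate to flushes plus a
couple of redundant (but harmless) sfences on CPUs without PCOMMIT, which I
think is what hpa was after for the CLFLUSHOPT-only case.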

