Date:	Mon, 26 Jan 2015 12:51:39 -0700
From:	Ross Zwisler <ross.zwisler@...ux.intel.com>
To:	"H. Peter Anvin" <h.peter.anvin@...el.com>
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH v2 0/2] add support for new persistent memory
 instructions

On Fri, 2015-01-23 at 15:03 -0800, H. Peter Anvin wrote:
> On 01/23/2015 12:40 PM, Ross Zwisler wrote:
> > This patch set adds support for two new persistent memory instructions, pcommit
> > and clwb.  These instructions were announced in the document "Intel
> > Architecture Instruction Set Extensions Programming Reference" with reference
> > number 319433-022.
> > 
> > https://software.intel.com/sites/default/files/managed/0d/53/319433-022.pdf
> > 
> 
> Please explain in these patch descriptions what the instructions
> actually do.

Sure, will do.

> +	volatile struct { char x[64]; } *p = __p;
> +
> +	asm volatile(ALTERNATIVE_2(
> +		".byte " __stringify(NOP_DS_PREFIX) "; clflush (%[pax])",
> +		".byte 0x66; clflush (%[pax])", /* clflushopt (%%rax) */
> +		X86_FEATURE_CLFLUSHOPT,
> +		".byte 0x66, 0x0f, 0xae, 0x30",  /* clwb (%%rax) */
> +		X86_FEATURE_CLWB)
> +		: [p] "+m" (*p)
> +		: [pax] "a" (p));
> 
> For the specific case of CLWB, we can use an "m" input rather than a
> "+m" output, simply because CLWB (or CLFLUSH* used as a standin for CLWB
> doesn't need to be ordered with respect to loads (whereas CLFLUSH* do).
> 
> Now, one can argue that for performance reasons we should still
> use "+m" in case we use the CLFLUSH* standin, to avoid flushing a cache
> line to memory just to bring it back in.

Understood, and an interesting point.  It seems like we are correct either
way, yes?  I'm inclined to keep the "+m" output since it's consistent with
clflush and clflushopt, and since it avoids the flush-then-reload issue with
the CLFLUSH* standins.  Please let me know if you have a preference.
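
For reference, here's a minimal sketch of the two operand forms with the
ALTERNATIVE_2 plumbing stripped out (the helper names are just for
illustration, they aren't in the posted patch):

/*
 * Current form: the cache line is a "+m" output, so the compiler must
 * assume the asm both reads and writes it, and later loads of the line
 * can't be moved above the flush.  The CLFLUSH* fallbacks want this.
 */
static inline void clwb_output(volatile void *__p)
{
	volatile struct { char x[64]; } *p = __p;

	asm volatile(".byte 0x66, 0x0f, 0xae, 0x30"	/* clwb (%%rax) */
		     : [p] "+m" (*p)
		     : [pax] "a" (p));
}

/*
 * "m"-input-only form: the compiler may reorder loads of the line
 * around the asm.  Fine for CLWB itself, which doesn't invalidate the
 * line, but not for the CLFLUSH* standins.
 */
static inline void clwb_input(volatile void *__p)
{
	volatile struct { char x[64]; } *p = __p;

	asm volatile(".byte 0x66, 0x0f, 0xae, 0x30"	/* clwb (%%rax) */
		     : /* no outputs */
		     : [pax] "a" (p), "m" (*p));
}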

> +static inline void pcommit(void)
> +{
> +	alternative(ASM_NOP4, ".byte 0x66, 0x0f, 0xae, 0xf8",
> +		    X86_FEATURE_PCOMMIT);
> +}
> +
> 
> Should we use an SFENCE as a standin if pcommit is unavailable, in case
> we end up using CLFLUSHOPT?

Ah, sorry, I really need to include an example flow in my patch descriptions
to make this more clear. :)

Both the flushes (clwb/clflushopt/clflush) and the pcommit are ordered by
either mfence or sfence.

An example function that flushes and commits a buffer could look like this
(based on clflush_cache_range):

void flush_and_commit_buffer(void *vaddr, unsigned int size)
{       
        void *vend = vaddr + size - 1;
        
        for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
                clwb(vaddr);
        
        /* Flush any possible final partial cacheline */
        clwb(vend);
        
        /* 
         * sfence to order clwb/clflushopt/clflush cache flushes
         * mfence via mb() also works 
         */
        wmb();

        pcommit();

        /* 
         * sfence to order pcommit
         * mfence via mb() also works 
         */
        wmb();
}

In this example function I don't begin with a fence because clwb (which may
fall back to clflushopt or clflush via alternatives) is already ordered with
respect to either writes alone, or to both reads and writes, depending on
whether the target cache line is passed to the asm as an input or as an
output operand.

If the platform doesn't support PCOMMIT, you end up with this:

void flush_and_commit_buffer(void *vaddr, unsigned int size)
{       
        void *vend = vaddr + size - 1;
        
        for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
                clwb(vaddr);
        
        /* Flush any possible final partial cacheline */
        clwb(vend);
        
        /* 
         * sfence to order clwb/clflushopt/clflush cache flushes
         * mfence via mb() also works 
         */
        wmb();

        nop(); /* from pcommit(), via alternatives */

        /* 
         * sfence to order pcommit
         * mfence via mb() also works 
         */
        wmb();
}

This is fine, but now you've got two fences in a row.  Another slightly
messier choice would be to include the fence in the pcommit assembly, so you
either get pcommit + sfence or a pair of NOPs, along the lines of the sketch
below.

