Message-ID: <e561e7a3-32c7-dac5-053d-47a0484b26c8@intel.com>
Date: Thu, 24 Sep 2020 08:07:44 -0700
From: Dave Jiang <dave.jiang@...el.com>
To: Borislav Petkov <bp@...en8.de>
Cc: vkoul@...nel.org, tglx@...utronix.de, mingo@...hat.com,
dan.j.williams@...el.com, tony.luck@...el.com, jing.lin@...el.com,
ashok.raj@...el.com, sanjay.k.kumar@...el.com,
fenghua.yu@...el.com, kevin.tian@...el.com,
David.Laight@...LAB.COM, dmaengine@...r.kernel.org,
linux-kernel@...r.kernel.org, Michael Matz <matz@...e.de>
Subject: Re: [PATCH v5 1/5] x86/asm: Carve out a generic movdir64b() helper
for general usage
On 9/24/2020 6:07 AM, Borislav Petkov wrote:
> On Wed, Sep 23, 2020 at 04:10:43PM -0700, Dave Jiang wrote:
>> +/* The dst parameter must be 64-bytes aligned */
>> +static inline void movdir64b(void *dst, const void *src)
>> +{
>> + /*
>> + * Note that this isn't an "on-stack copy", just definition of "dst"
>> + * as a pointer to 64-bytes of stuff that is going to be overwritten.
>> + * In the MOVDIR64B case that may be needed as you can use the
>> + * MOVDIR64B instruction to copy arbitrary memory around. This trick
>> + * lets the compiler know how much gets clobbered.
>> + */
>> + volatile struct { char _[64]; } *__dst = dst;
>> +
>> + /* MOVDIR64B [rdx], rax */
>> + asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
>> + :
>> + : "m" (*(struct { char _[64];} **)src), "a" (__dst)
>> + : "memory");
>> +}
>
> Ok, Micha and I hashed it out on IRC, here's what you do. Please keep
> the comments too because we will forget soon again.
>
> static inline void movdir64b(void *dst, const void *src)
> {
>         const struct { char _[64]; } *__src = src;
>         struct { char _[64]; } *__dst = dst;
>
>         /*
>          * MOVDIR64B %(rdx), rax.
>          *
>          * Both __src and __dst must be memory constraints in order to tell the
>          * compiler that no other memory accesses should be reordered around
>          * this one.
>          *
>          * Also, both must be supplied as lvalues because this tells
>          * the compiler what the object is (its size) the instruction accesses.
>          * I.e., not the pointers but what they point to, thus the deref'ing '*'.
>          */
>         asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
>                      : "+m" (*__dst)
>                      : "m" (*__src), "a" (__dst), "d" (__src));
> }
Thanks Boris. I will update and resend.
>
> Thx.
>