Message-ID: <87tulzgur1.ffs@nanos.tec.linutronix.de>
Date:   Tue, 15 Jun 2021 14:47:14 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Borislav Petkov <bp@...e.de>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Tony Luck <tony.luck@...el.com>,
        Yu-cheng Yu <yu-cheng.yu@...el.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Kan Liang <kan.liang@...ux.intel.com>
Subject: Re: [patch V2 02/52] x86/fpu: Fix copy_xstate_to_kernel() gap handling

On Tue, Jun 15 2021 at 13:07, Borislav Petkov wrote:
> On Mon, Jun 14, 2021 at 05:44:10PM +0200, Thomas Gleixner wrote:
>>   2) Keeping track of the last copied state in the target buffer and
>>      explicitely zero it when there is a feature or alignment gap.
>
> WARNING: 'explicitely' may be misspelled - perhaps 'explicitly'?
> #93: 
>      explicitely zero it when there is a feature or alignment gap.
>       ^^^^^^^^^^^

I'll never learn that. /me goes to write some elisp.

>> +	membuf_write(to, from_xstate ? xstate : init_xstate, size);
>
> I wonder - since we're making this code more robust anyway - whether
> we should add an additional assertion here to check whether that
> membuf.left is < size and warn.

Nah. The wonder of membuf_write() is that it never writes past the end
of the buffer, which is designed to allow partial reads w/o checking a
gazillion times for return values etc.

>> +
>> +	/* Copy MXCSR when SSE or YMM are set in the feature mask */
>> +	copy_feature(header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM),
>> +		     &to, &xsave->i387.mxcsr, &xinit->i387.mxcsr,
>> +		     MXCSR_AND_FLAGS_SIZE);
>
> Yah, this copies a whopping 8 bytes:
>
>         u32                     mxcsr;          /* MXCSR Register State */
>         u32                     mxcsr_mask;     /* MXCSR Mask           */
>
> I know, I know, it was like that before but dammit, that's obscure.

The point is that this gives us the proper init.mxcsr value when SSE and
YMM are not set.

>> +	/* Copy the remaining FP state */
>> +	copy_feature(header.xfeatures & XFEATURE_MASK_FP,
>> +		     &to, &xsave->i387.st_space, &xinit->i387.st_space,
>> +		     sizeof(xsave->i387.st_space));
>> +
>> +	/* Copy the SSE state - shared with YMM */
>> +	copy_feature(header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM),
>> +		     &to, &xsave->i387.xmm_space, &xinit->i387.xmm_space,
>> +		     16 * 16);
>
> Why not
> 	sizeof(xsave->i387.xmm_space)

because I missed that.

Thanks,

        tglx

