Message-ID: <5564A40D.6010002@sr71.net>
Date: Tue, 26 May 2015 09:49:17 -0700
From: Dave Hansen <dave@...1.net>
To: Ingo Molnar <mingo@...nel.org>
CC: linux-kernel@...r.kernel.org, x86@...nel.org, tglx@...utronix.de
Subject: Re: [PATCH 00/19] x86, mpx updates for 4.2 (take 7)

On 05/20/2015 03:05 AM, Ingo Molnar wrote:
>> >
>> > This sees breakage unless it is either booted with 'noxsaves'
>> > or has Fenghua's set from here applied:
>> >
>> > http://lkml.kernel.org/r/1429678319-61356-1-git-send-email-fenghua.yu@intel.com
>> >
>> > This set is also available against 4.1-rc3 in git:
>> >
>> > git://git.kernel.org/pub/scm/linux/kernel/git/daveh/x86-mpx.git mpx-v22
> Yeah, so as a first step, could you please test that the patch below
> solves the crashes as well, without having to specify 'noxsaves' on
> the boot line?
Yes, that does seem to make it happy in lieu of Fenghua's patches.
> + /*
> + * Quirk: we don't yet handle the XSAVES* instructions
> + * correctly, as we don't correctly convert between
> + * standard and compacted format when interfacing
> + * with user-space - so disable it for now.
> + *
> + * The difference is small: with recent CPUs the
> + * compacted format is only marginally smaller than
> + * the standard FPU state format.
> + *
> + * ( This is easy to backport while we are fixing
> + * XSAVES* support. )
> + */
> + setup_clear_cpu_cap(X86_FEATURE_XSAVES);
> }
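
As an aside on the size remark in that comment: both sizes are
visible from CPUID leaf 0xD -- sub-leaf 0 reports the standard-format
XSAVE area size for the features enabled in XCR0, and sub-leaf 1
reports the compacted size that XSAVES/XSAVEC write on CPUs that
support them.  A quick user-space sketch, just for illustration
(__get_cpuid_count() is the stock <cpuid.h> helper; the program
itself is made up and not part of any patch):

	#include <cpuid.h>
	#include <stdio.h>

	/*
	 * CPUID.(EAX=0xD, ECX=0):EBX = standard-format XSAVE area
	 * size for the features currently enabled in XCR0.
	 * CPUID.(EAX=0xD, ECX=1):EBX = compacted-format size that
	 * XSAVES/XSAVEC would write (0 on CPUs without XSAVES).
	 */
	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;
		unsigned int std_size, compacted_size;

		if (!__get_cpuid_count(0xd, 0, &eax, &ebx, &ecx, &edx))
			return 1;
		std_size = ebx;

		if (!__get_cpuid_count(0xd, 1, &eax, &ebx, &ecx, &edx))
			return 1;
		compacted_size = ebx;

		printf("standard:  %u bytes\n", std_size);
		printf("compacted: %u bytes\n", compacted_size);
		return 0;
	}
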
FWIW, I think it would be prudent to also clear X86_FEATURE_XSAVEC.
All of the issues I am aware of are related to the compact format, not
'xsaves' itself (although 'xsaves' does *use* the compact format of course).
The XSAVEC bit is the one that technically indicates compact-format
support, although I don't think the kernel makes any actual use of it
at present.
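
Concretely, something like this in the same quirk (an untested sketch
on top of the quoted hunk, not a real patch):

	/*
	 * Sketch: clear the compacted-format instructions as a group.
	 * XSAVES implies the compacted format, but XSAVEC is the bit
	 * that actually advertises compacted-format support.
	 */
	setup_clear_cpu_cap(X86_FEATURE_XSAVES);
	setup_clear_cpu_cap(X86_FEATURE_XSAVEC);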