Message-ID: <55C3758A.50005@sr71.net>
Date: Thu, 06 Aug 2015 07:56:10 -0700
From: Dave Hansen <dave@...1.net>
To: Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
CC: Peter Anvin <hpa@...or.com>, Denys Vlasenko <dvlasenk@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Andy Lutomirski <luto@...nel.org>,
bp@...en8.de, Peter Zijlstra <a.p.zijlstra@...llo.nl>,
fenghua.yu@...el.com, x86@...nel.org, dave.hansen@...ux.intel.com
Subject: Re: [PATCH] x86, fpu: correct XSAVE xstate size calculation

I think we have three options. Here's some rough pseudo-ish-code to
sketch them out.

/* Option 1, what we have today */

/*
 * This breaks if offset[i]+size[i] != offset[i+1]
 * or if alignment is in play.  Silly hardware breaks
 * this today.
 */
for (i = 0; i < nr_xstates; i++) {
	if (!enabled_xstate(i))
		continue;
	total_blob_size += xstate_sizes[i];
}
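
To make the failure mode concrete, here is a stand-alone sketch with
made-up offsets and sizes (illustrative only, not what CPUID actually
reports): the first extended state ends at 576+256 = 832, but the next
one does not start until 960, so just summing the sizes comes out 128
bytes short of where the last state really ends.

#include <stdio.h>

/* made-up layout with a hole between 832 and 960 */
static const unsigned int xstate_offsets[] = { 576, 960 };
static const unsigned int xstate_sizes[]   = { 256,  64 };

int main(void)
{
	unsigned int total_blob_size = 576;	/* legacy area + xsave header */
	unsigned int i;

	for (i = 0; i < 2; i++)
		total_blob_size += xstate_sizes[i];	/* option 1: just add */

	/* prints 896 vs. 1024 */
	printf("size-sum: %u, real end of last state: %u\n",
	       total_blob_size, xstate_offsets[1] + xstate_sizes[1]);
	return 0;
}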

/* Option 2: search for the end of the last state, probably works, Ingo likes? */
for (i = 0; i < nr_xstates; i++) {
	if (cpu_has_xsaves && !enabled_xstate(i))
		continue;
	end_of_state = xstate_offsets[i] + xstate_sizes[i];
	if (xstate_is_aligned[i])	/* currently not implemented */
		end_of_state = ALIGN(end_of_state, 64);
	if (end_of_state > total_blob_size)
		total_blob_size = end_of_state;
}
/* align unconditionally, maybe??? */
total_blob_size = ALIGN(total_blob_size, 64);

/* Double-check our obviously bug-free math with what the CPU says */
if (!cpu_has_xsaves)
	cpuid(0xd, 0, &check_total_blob_size, ...);
else
	cpuid(0xd, 1, &check_total_blob_size, ...);
WARN_ON(check_total_blob_size != total_blob_size);
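
For comparison, the option-2 scan run stand-alone against the same
made-up layout as above (again, not real CPUID numbers) lands on the
real end of the last state, and the unconditional 64-byte round-up is
a no-op here:

#include <stdio.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static const unsigned int xstate_offsets[] = { 576, 960 };
static const unsigned int xstate_sizes[]   = { 256,  64 };

int main(void)
{
	unsigned int total_blob_size = 0, end_of_state, i;

	for (i = 0; i < 2; i++) {
		end_of_state = xstate_offsets[i] + xstate_sizes[i];
		if (end_of_state > total_blob_size)
			total_blob_size = end_of_state;	/* track the max */
	}
	total_blob_size = ALIGN(total_blob_size, 64);

	printf("end-of-last-state scan: %u\n", total_blob_size);	/* 1024 */
	return 0;
}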

/* Option 3, trust the CPU (what Dave's patch does) */
if (!cpu_has_xsaves)
	cpuid(0xd, 0, &total_blob_size, ...);
else
	cpuid(0xd, 1, &total_blob_size, ...);
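
And a user-space approximation of option 3, using the gcc/clang
<cpuid.h> helper.  This just assumes the SDM leaf 0xd semantics
(sub-leaf 0 EBX = size for states enabled in XCR0, sub-leaf 1 EBX =
size of the XSAVES/compacted buffer) and a CPU new enough to have the
leaf; from user space it obviously cannot account for anything the
kernel sets in IA32_XSS.

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID.(EAX=0xd, ECX=1): EAX bit 3 advertises XSAVES */
	__cpuid_count(0xd, 1, eax, ebx, ecx, edx);

	if (eax & (1 << 3)) {
		/* sub-leaf 1 EBX: size for XCR0|IA32_XSS, compacted format */
		printf("xsaves blob size: %u bytes\n", ebx);
	} else {
		/* sub-leaf 0 EBX: size for states currently enabled in XCR0 */
		__cpuid_count(0xd, 0, eax, ebx, ecx, edx);
		printf("xsave blob size: %u bytes\n", ebx);
	}
	return 0;
}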