Message-ID: <4BDF4407.8000503@zytor.com>
Date: Mon, 03 May 2010 14:45:43 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Avi Kivity <avi@...hat.com>,
Suresh Siddha <suresh.b.siddha@...el.com>
CC: Brian Gerst <brgerst@...il.com>, Dexuan Cui <dexuan.cui@...el.com>,
Sheng Yang <sheng@...ux.intel.com>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH 1/2] x86: eliminate TS_XSAVE
On 05/02/2010 10:44 AM, Avi Kivity wrote:
> On 05/02/2010 08:38 PM, Brian Gerst wrote:
>>> On Sun, May 2, 2010 at 10:53 AM, Avi Kivity <avi@...hat.com> wrote:
>>
>>> The fpu code currently uses current->thread_info->status & TS_XSAVE as
>>> a way to distinguish between XSAVE-capable processors and older processors.
>>> The decision is not really task-specific; instead we use the task status to
>>> avoid a global memory reference - the value should be the same across all
>>> threads.
>>>
>>> Eliminate this tie-in into the task structure by using an alternative
>>> instruction keyed off the XSAVE cpu feature; this results in shorter and
>>> faster code, without introducing a global memory reference.
>>>
>> I think you should either just use cpu_has_xsave, or extend this use
>> of alternatives to all cpu features. It doesn't make sense to only do
>> it for xsave.
>>
>
> I was trying to avoid a performance regression relative to the current
> code, as it appears that some care was taken to avoid the memory reference.
>
> I agree that it's probably negligible compared to the save/restore
> code. If the x86 maintainers agree as well, I'll replace it with
> cpu_has_xsave.
>
I asked Suresh to comment on this, since he wrote the original code. He
did confirm that the intent was to avoid a global memory reference.
-hpa