Message-ID: <1eb55c22-6c90-e1b3-19d4-cb7b2c6fc0dc@intel.com>
Date: Thu, 15 Jun 2017 07:33:18 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Andy Lutomirski <luto@...nel.org>,
Robert O'Callahan <robert@...llahan.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>
Subject: Re: xgetbv nondeterminism
On 06/14/2017 10:18 PM, Andy Lutomirski wrote:
> Dave, why is XINUSE exposed at all to userspace?
You need it for XSAVEOPT when it is using the init optimization: XINUSE
is what tells you which state was actually written and which state in
the XSAVE buffer is potentially stale with respect to what's in the
registers. I guess you can just use XSAVE instead of XSAVEOPT, though.
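Reading it is trivial, for what it's worth. Something like this (a
rough userspace sketch, not anyone's real code; assumes a GCC-style
compiler and that CPUID.(EAX=0DH,ECX=1):EAX[2] has already confirmed
XGETBV-with-ECX=1 support):

#include <stdint.h>

static inline uint64_t xgetbv(uint32_t ecx)
{
	uint32_t eax, edx;

	asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (ecx));
	return ((uint64_t)edx << 32) | eax;
}

/* XINUSE: a set bit means that component is *not* in its init state. */
static inline uint64_t read_xinuse(void)
{
	return xgetbv(1);
}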
As you pointed out, if you are using XSAVEC's compaction features by
leaving bits unset in the requested-feature bitmap (RFBM), you have no
idea how much data XSAVEC will write unless you read XINUSE with
XGETBV. But you can get around *that* by just presizing the XSAVE
buffer to be big enough for everything.
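For instance (a hedged sketch, not glibc's or the kernel's actual
code): CPUID.(EAX=0DH,ECX=0):ECX reports the maximum XSAVE area size
over all features the processor supports, which upper-bounds anything
XSAVEC can write:

#include <cpuid.h>	/* __get_cpuid_count() on gcc/clang */
#include <stdint.h>
#include <stdlib.h>

/* Allocate an XSAVE area big enough for every feature the CPU supports. */
static void *alloc_xsave_buf(uint32_t *size_out)
{
	uint32_t eax, ebx, ecx, edx, size;

	if (!__get_cpuid_count(0x0d, 0, &eax, &ebx, &ecx, &edx))
		return NULL;

	/* ECX: max save area size over all supported features. */
	size = (ecx + 63) & ~63u;	/* aligned_alloc() wants a multiple */
	*size_out = size;

	/* XSAVE areas must be 64-byte aligned. */
	return aligned_alloc(64, size);
}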
So, I guess that leaves its use to just figuring out how much XSAVEOPT
(and friends) are going to write.
> To be fair, glibc uses this new XGETBV feature, but I suspect its
> usage is rather dubious. Shouldn't it just do XSAVEC directly rather
> than rolling its own code?
A quick grep through my glibc source shows only XGETBV(0) being used,
which reads XCR0. I don't see any XGETBV(1), which reads XINUSE. Did I
miss it?
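For reference, the XGETBV(0) pattern I do see looks roughly like this
(my own sketch, not glibc's actual code; the XFEATURE_MASK_YMM name is
made up here):

#include <stdint.h>

/* Hypothetical name; bit 2 of XCR0 is AVX/YMM state. */
#define XFEATURE_MASK_YMM	(1ULL << 2)

static inline uint64_t xgetbv(uint32_t ecx)
{
	uint32_t eax, edx;

	asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (ecx));
	return ((uint64_t)edx << 32) | eax;
}

/* ECX=0 reads XCR0: which state components the OS context-switches. */
static int os_supports_avx_state(void)
{
	return (xgetbv(0) & XFEATURE_MASK_YMM) == XFEATURE_MASK_YMM;
}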