lists.openwall.net | Open Source and information security mailing list archives
Date: Fri, 11 Jun 2021 12:16:43 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Thomas Gleixner <tglx@...utronix.de>, LKML <linux-kernel@...r.kernel.org>
Cc: Peter Zijlstra <peterz@...radead.org>, x86@...nel.org
Subject: Re: [PATCH] perf/x86/intel/lbr: Zero the xstate buffer on allocation

On 6/11/2021 9:03 AM, Thomas Gleixner wrote:
> XRSTORS requires a valid xstate buffer to work correctly. XSAVES does not
> guarantee to write a fully valid buffer according to the SDM:
>
> "XSAVES does not write to any parts of the XSAVE header other than the
>  XSTATE_BV and XCOMP_BV fields."
>
> XRSTORS triggers a #GP:
>
> "If bytes 63:16 of the XSAVE header are not all zero."
>
> It's dubious at best how this can work at all when the buffer is not zeroed
> before use.
>

I didn't run into any issues in my testing. I guess the reserved bits in
the buffer always happened to be 0, which is why it didn't raise an alarm
for me.

Thank you very much for the fix.

Thanks,
Kan

> Allocate the buffers with __GFP_ZERO to prevent XRSTORS failure.
>
> Fixes: ce711ea3cab9 ("perf/x86/intel/lbr: Support XSAVES/XRSTORS for LBR context switch")
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> Cc: Kan Liang <kan.liang@...ux.intel.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: x86@...nel.org
> ---
>  arch/x86/events/intel/lbr.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> --- a/arch/x86/events/intel/lbr.c
> +++ b/arch/x86/events/intel/lbr.c
> @@ -731,7 +731,8 @@ void reserve_lbr_buffers(void)
> 		if (!kmem_cache || cpuc->lbr_xsave)
> 			continue;
>
> -		cpuc->lbr_xsave = kmem_cache_alloc_node(kmem_cache, GFP_KERNEL,
> +		cpuc->lbr_xsave = kmem_cache_alloc_node(kmem_cache,
> +							GFP_KERNEL | __GFP_ZERO,
> 							cpu_to_node(cpu));
> 	}
> }
>