Date:   Mon, 25 Sep 2023 23:06:55 +0000
From:   "Huang, Kai" <kai.huang@...el.com>
To:     "Li, Xin3" <xin3.li@...el.com>,
        "Compostella, Jeremy" <jeremy.compostella@...el.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>
CC:     "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH] x86/cpu/intel: Fix MTRR verification for TME enabled
 platforms

On Sat, 2023-09-23 at 07:37 +0000, Li, Xin3 wrote:
> Hi Jeremy,
> 
> > diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
> > index be4045628fd3..34c54432bf00 100644
> > --- a/arch/x86/kernel/cpu/intel.c
> > +++ b/arch/x86/kernel/cpu/intel.c
> > @@ -184,6 +184,90 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
> >  	return false;
> >  }
> > 
> > +#define MSR_IA32_TME_ACTIVATE		0x982
> 
> I know you're just moving the definitions, however we usually define MSRs
> and their bits in arch/x86/include/asm/msr-index.h.
> 
> > +
> > +/* Helpers to access TME_ACTIVATE MSR */
> > +#define TME_ACTIVATE_LOCKED(x)		(x & 0x1)
> > +#define TME_ACTIVATE_ENABLED(x)		(x & 0x2)
> 
> What about:
> 
> #define TME_ACTIVATE_LOCKED(x)		(x & BIT(0))
> #define TME_ACTIVATE_ENABLED(x)		(x & BIT(1))

(Also +Kirill, who is the author of detect_tme().)

When moving these bit definitions to msr-index.h, IMHO we should just define the
macros for the bit positions and remove the (x) part.  That is consistent with
all the other existing definitions there.

Something like:

#define TME_ACTIVATE_LOCKED		BIT(0)
...
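
For illustration, extending that to all the fields in the quoted hunk could look
roughly like the below (just a sketch; the *_MASK names and the use of
GENMASK_ULL/FIELD_GET are my rewording of the open-coded shifts, not something
from the patch):

#define TME_ACTIVATE_LOCKED			BIT(0)
#define TME_ACTIVATE_ENABLED			BIT(1)
#define TME_ACTIVATE_POLICY_MASK		GENMASK_ULL(7, 4)	/* Bits 7:4 */
#define TME_ACTIVATE_KEYID_BITS_MASK		GENMASK_ULL(35, 32)	/* Bits 35:32 */
#define TME_ACTIVATE_CRYPTO_ALGS_MASK		GENMASK_ULL(63, 48)	/* Bits 63:48 */

and then detect_tme() could use, e.g., FIELD_GET(TME_ACTIVATE_POLICY_MASK,
tme_activate) (from <linux/bitfield.h>) instead of the function-like macros.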

 
> 
> > +
> > +#define TME_ACTIVATE_POLICY(x)		((x >> 4) & 0xf)	/* Bits 7:4 */
> 
> And:
> 
> /* Bits 7:4 are TME activate policy bits */
> #define TME_ACTIVATE_POLICY_OFFSET		4
> #define TME_ACTIVATE_POLICY_MASK		0xf
> #define TME_ACTIVATE_POLICY(x)						\
> 	((x >> TME_ACTIVATE_POLICY_OFFSET) & TME_ACTIVATE_POLICY_MASK)
> 
> > +#define TME_ACTIVATE_KEYID_BITS(x)	((x >> 32) & 0xf)	/* Bits 35:32 */
> > +
> > +#define TME_ACTIVATE_CRYPTO_ALGS(x)	((x >> 48) & 0xffff)	/* Bits 63:48 */
> 
> ditto
> 
> > @@ -335,6 +419,9 @@ static void early_init_intel(struct cpuinfo_x86 *c)
> >  	 */
> >  	if (detect_extended_topology_early(c) < 0)
> >  		detect_ht_early(c);
> > +
> 
> Please add a comment here explaining why detect_tme() needs to be called
> in early_init_intel().

Also, this works because init_intel() calls early_init_intel() too.  I have
noticed before that people don't like that pattern, so maybe it is worth adding
some extra words about it, in case people want to stop calling
early_init_intel() from init_intel() in the future.  But I am not sure whether
this is worth pointing out.
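
For illustration only: if the reason is that detect_tme() shrinks
c->x86_phys_bits for the keyID bits before the MTRR checks rely on the physical
address width (which is what the subject suggests), the comment could read
something like the below (wording is mine, please adjust):

	/*
	 * detect_tme() must run early because it adjusts c->x86_phys_bits
	 * for the TME/MKTME keyID bits, and the later MTRR verification
	 * depends on the correct physical address width.
	 */
	if (cpu_has(c, X86_FEATURE_TME))
		detect_tme(c);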

Alternatively, should we switch to calling detect_tme() from bsp_init_intel()
instead of early_init_intel(), and still keep calling detect_tme() from
init_intel()?

This would also avoid calling detect_tme() twice for the BSP, IIUC, because
AFAICT early_init_intel() is called twice for the BSP.
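
For reference, my rough understanding of the call paths behind that claim (from
reading the current code, so please double-check):

	BSP: early_cpu_init() -> early_identify_cpu() -> early_init_intel()
	     identify_boot_cpu() -> identify_cpu() -> init_intel()
	                                               -> early_init_intel()  /* 2nd time */
	AP:  identify_secondary_cpu() -> identify_cpu() -> init_intel()
	                                               -> early_init_intel()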
  
> 
> > +	if (cpu_has(c, X86_FEATURE_TME))
> > +		detect_tme(c);
> >  }
> > 
> >  static void bsp_init_intel(struct cpuinfo_x86 *c)
> 
> Thanks!
>     Xin
