Message-ID: <20160116104507.GB31869@pd.tnic>
Date:	Sat, 16 Jan 2016 11:45:07 +0100
From:	Borislav Petkov <bp@...en8.de>
To:	Aravind Gopalakrishnan <Aravind.Gopalakrishnan@....com>
Cc:	tony.luck@...el.com, tglx@...utronix.de, mingo@...hat.com,
	hpa@...or.com, x86@...nel.org, linux-edac@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 4/5] x86/mcheck/AMD: Fix LVT offset configuration for
 thresholding

On Fri, Jan 15, 2016 at 05:50:35PM -0600, Aravind Gopalakrishnan wrote:
> For processor families with the SMCA feature, the LVT offset
> for threshold interrupts is configured only in MSR 0xC0000410
> and not in each per-bank MISC register as was done in earlier
> families.
> 
> Fix the code to obtain the LVT offset from the correct MSR for
> those families which have the SMCA feature enabled.
> 
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@....com>
> ---
>  arch/x86/kernel/cpu/mcheck/mce_amd.c | 34 +++++++++++++++++++++++++++++++++-
>  1 file changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
> index e650fdc..29a7688 100644
> --- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
> +++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
> @@ -49,6 +49,15 @@
>  #define DEF_LVT_OFF		0x2
>  #define DEF_INT_TYPE_APIC	0x2
>  
> +/*
> + * SMCA settings:
> + * The following defines provide masks or bit positions of
> + * MSRs that are applicable only to SMCA-enabled processors.
> + */
> +
> +/* Threshold LVT offset is at MSR 0xC0000410[15:12] */
> +#define SMCA_THR_LVT_OFF	0xF000
> +
>  static const char * const th_names[] = {
>  	"load_store",
>  	"insn_fetch",
> @@ -143,6 +152,15 @@ static int lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
>  	}
>  
>  	if (apic != msr) {
> +		/*
> +		 * For SMCA-enabled processors, the LVT offset is programmed
> +		 * at a different MSR, and the BIOS provides the value.
> +		 * The original field where the LVT offset was set is
> +		 * Reserved, so return early here.
> +		 */
> +		if (mce_flags.smca)
> +			return 0;
> +
>  		pr_err(FW_BUG "cpu %d, invalid threshold interrupt offset %d "
>  		       "for bank %d, block %d (MSR%08X=0x%x%08x)\n",
>  		       b->cpu, apic, b->bank, b->block, b->address, hi, lo);
> @@ -301,7 +319,21 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
>  				goto init;
>  
>  			b.interrupt_enable = 1;
> -			new	= (high & MASK_LVTOFF_HI) >> 20;
> +
> +			if (mce_flags.smca) {
> +				u32 smca_low = 0, smca_high = 0;

Those variables don't need to be initialized to 0 since you're reading
into them right afterwards.

I fixed that up.
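IOW, the beginning of that branch simply becomes (a sketch, modulo
whitespace, not necessarily the exact committed hunk):

	if (mce_flags.smca) {
		/* rdmsr_safe() below writes both halves anyway */
		u32 smca_low, smca_high;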



> +
> +				/* Gather LVT offset for thresholding */
> +				if (rdmsr_safe(MSR_CU_DEF_ERR,
> +					       &smca_low,
> +					       &smca_high))
> +					break;
> +

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
