Message-ID: <20150317102048.GG19645@pd.tnic>
Date: Tue, 17 Mar 2015 11:20:49 +0100
From: Borislav Petkov <bp@...en8.de>
To: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@....com>
Cc: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
tony.luck@...el.com, slaoub@...il.com, luto@...capital.net,
x86@...nel.org, linux-kernel@...r.kernel.org,
linux-edac@...r.kernel.org
Subject: Re: [PATCH] x86, mce, severities: Add AMD severities function
On Mon, Mar 16, 2015 at 12:16:04PM -0500, Aravind Gopalakrishnan wrote:
> +/* keeping amd_mce_severity in sync with AMD error scope hierarchy table */
> +static int amd_mce_severity(struct mce *m, enum context ctx)
> +{
> +	/* Processor Context Corrupt, no need to fumble too much, die! */
> +	if (m->status & MCI_STATUS_PCC)
> +		return MCE_PANIC_SEVERITY;
> +
> +	if (m->status & MCI_STATUS_UC) {
> +		/*
> +		 * On older systems where the overflow_recov flag is not
> +		 * present, we should simply panic if an overflow occurs.
> +		 * If the overflow_recov flag is set, then software can try
> +		 * to at least kill the process to salvage system operation.
> +		 */
> +
> +		/* at least one error was not logged */
> +		if (m->status & MCI_STATUS_OVER && !mce_flags.overflow_recov)
> +			return MCE_PANIC_SEVERITY;
> +
> +		/* software can try to contain */
> +		if (!(m->mcgstatus & MCG_STATUS_RIPV) &&
> +		    mce_flags.overflow_recov) {
> +			if (ctx == IN_KERNEL)
> +				return MCE_PANIC_SEVERITY;
We're testing mce_flags.overflow_recov twice here; perhaps do this instead:
/*
 * < Comment about overflow recovery bit>
 */
if (mce_flags.overflow_recov) {
	if (!(m->mcgstatus & MCG_STATUS_RIPV) && (ctx == IN_KERNEL))
		return MCE_PANIC_SEVERITY;
} else {
	if (m->status & MCI_STATUS_OVER)
		return MCE_PANIC_SEVERITY;
}
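
For illustration, here is a self-contained userspace mock-up of the
restructured logic. The bit masks, struct mce, mce_flags and the
non-panic return values (MCE_AR_SEVERITY, MCE_KEEP_SEVERITY) are
stand-ins for the real <asm/mce.h> definitions, which the quoted hunk
does not show; only the branch structure matters here:

/*
 * Userspace sketch of the restructured severity decision.
 * All definitions below are illustrative stand-ins, not the
 * kernel's actual ones.
 */
#include <stdio.h>
#include <stdint.h>

#define MCI_STATUS_PCC	(1ULL << 57)	/* Processor Context Corrupt */
#define MCI_STATUS_UC	(1ULL << 61)	/* uncorrected error */
#define MCI_STATUS_OVER	(1ULL << 62)	/* error overflow */
#define MCG_STATUS_RIPV	(1ULL << 0)	/* restart IP valid */

enum context { IN_KERNEL, IN_USER };
enum severity { MCE_KEEP_SEVERITY, MCE_AR_SEVERITY, MCE_PANIC_SEVERITY };

struct mce {
	uint64_t status;
	uint64_t mcgstatus;
};

static struct { int overflow_recov; } mce_flags = { .overflow_recov = 1 };

static int amd_mce_severity(struct mce *m, enum context ctx)
{
	/* Processor Context Corrupt: state is lost, panic unconditionally. */
	if (m->status & MCI_STATUS_PCC)
		return MCE_PANIC_SEVERITY;

	if (m->status & MCI_STATUS_UC) {
		/*
		 * With overflow recovery, software may contain the error
		 * unless it hit the kernel with no valid restart IP.
		 * Without it, an overflow means an error was lost, so panic.
		 */
		if (mce_flags.overflow_recov) {
			if (!(m->mcgstatus & MCG_STATUS_RIPV) &&
			    ctx == IN_KERNEL)
				return MCE_PANIC_SEVERITY;
		} else {
			if (m->status & MCI_STATUS_OVER)
				return MCE_PANIC_SEVERITY;
		}
		/* Assumed fallthrough: containable uncorrected error. */
		return MCE_AR_SEVERITY;
	}

	return MCE_KEEP_SEVERITY;
}

int main(void)
{
	struct mce m = {
		.status = MCI_STATUS_UC | MCI_STATUS_OVER,
		.mcgstatus = MCG_STATUS_RIPV,
	};

	/* overflow_recov set, valid restart IP: containable, not a panic. */
	printf("severity: %d\n", amd_mce_severity(&m, IN_USER));
	return 0;
}

Testing overflow_recov once up front keeps the recoverable and
non-recoverable paths visibly separate, which is the point of the
suggestion.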
--
Regards/Gruss,
Boris.
ECO tip #101: Trim your mails when you reply.