Message-ID: <alpine.LFD.2.02.1205232044150.3231@ionos>
Date:	Wed, 23 May 2012 20:58:40 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	"Luck, Tony" <tony.luck@...el.com>
cc:	Chen Gong <gong.chen@...ux.intel.com>,
	"bp@...64.org" <bp@...64.org>, "x86@...nel.org" <x86@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: RE: [PATCH] x86: auto poll/interrupt mode switch for CMC to stop
 CMC storm

On Wed, 23 May 2012, Luck, Tony wrote:

> > What's the point of doing this work? Why can't we just do that on the
> > CPU which got hit by the MCE storm and leave the others alone? They
> > either detect it themselves or are just not affected.
> 
> CMCI gets broadcast to all threads on a socket. So
> if one cpu has a problem, many cpus have a problem :-(
> Some machine check banks are local to a thread/core,
> so we need to make sure that the CMCI gets taken by
> someone who can actually see the bank with the problem.
> The others are collateral damage - but this means there
> is even more reason to do something about a CMCI storm
> as the effects are not localized.

Thanks for the explanation. That should have been part of the
patch/changelog.

But there are a few questions left:

If I understand correctly, the CMCI gets broadcast to all threads on a
socket, but only one handles it. So if it's the wrong one (not seeing
the local bank of the affected one) then you get that storm
behaviour. So you have to switch all of them to polling mode in order
to get to the root cause of the CMCI.

If that's the case, then I really can't understand the 5 CMCIs per
second threshold for defining the storm and switching to poll mode.
I'd rather expect 5 of them in a row.

Confused.
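
To make the distinction concrete, here is a rough user-space sketch of
the two possible storm definitions (not the patch code; the function
names and the value 5 are only illustrative):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define STORM_CNT	5	/* illustrative, mirrors the 5 from the patch */

/* Variant A: rate based - storm if STORM_CNT CMCIs land in the same second. */
static int rate_cnt;
static time_t rate_window;

static bool storm_by_rate(time_t now)
{
	if (now != rate_window) {
		rate_window = now;
		rate_cnt = 0;
	}
	return ++rate_cnt >= STORM_CNT;
}

/*
 * Variant B: streak based - storm after STORM_CNT consecutive CMCIs,
 * reset as soon as a poll comes up clean.
 */
static int streak_cnt;

static bool storm_by_streak(bool cmci_found)
{
	if (!cmci_found) {
		streak_cnt = 0;
		return false;
	}
	return ++streak_cnt >= STORM_CNT;
}

int main(void)
{
	time_t now = time(NULL);
	int i;

	for (i = 0; i < STORM_CNT; i++)
		printf("rate:   event %d -> storm=%d\n", i + 1, storm_by_rate(now));

	for (i = 0; i < STORM_CNT; i++)
		printf("streak: event %d -> storm=%d\n", i + 1, storm_by_streak(true));

	return 0;
}

Variant A only trips when the CMCIs are bunched up in time; Variant B is
what I would have expected, i.e. it does not care about the wall clock
at all.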

> > What's wrong with doing that strictly per cpu and avoid the whole
> > global state horror?
> 
> Is that less of a horror? We'd have some cpus polling and some
> taking CMCI (in somewhat arbitrary and ever changing combinations).
> I'm not sure which is less bad.

It's definitely less horrible than an implementation which allows
arbitrary disable/enable work to be scheduled. It really depends on how
the hardware actually works, which I have not fully understood yet.
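
Just to illustrate what I mean by strictly per cpu, a minimal sketch
(plain user-space C with a made-up NR_CPUS array instead of real per-cpu
data, so purely illustrative):

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS		4
#define STORM_CNT	5

enum cmc_mode { CMC_INTERRUPT, CMC_POLL };

struct cpu_cmc_state {
	enum cmc_mode mode;
	int storm_cnt;		/* consecutive CMCIs seen by this cpu only */
};

static struct cpu_cmc_state cpu_state[NR_CPUS];

/* CPU @cpu took a CMCI: it decides for itself, nobody else is touched. */
static void cmci_event(int cpu)
{
	struct cpu_cmc_state *s = &cpu_state[cpu];

	if (s->mode == CMC_POLL)
		return;

	if (++s->storm_cnt >= STORM_CNT) {
		s->mode = CMC_POLL;
		printf("cpu%d: storm, switching to poll mode\n", cpu);
	}
}

/* Called from this cpu's poll timer once its banks come up clean again. */
static void cmci_poll_quiet(int cpu)
{
	struct cpu_cmc_state *s = &cpu_state[cpu];

	if (s->mode == CMC_POLL) {
		s->mode = CMC_INTERRUPT;
		s->storm_cnt = 0;
		printf("cpu%d: quiet again, re-enabling CMCI\n", cpu);
	}
}

int main(void)
{
	int i;

	/* Only cpu1 gets hammered; the other cpus keep their own state. */
	for (i = 0; i < STORM_CNT; i++)
		cmci_event(1);

	cmci_poll_quiet(1);
	return 0;
}

No global state, no work scheduled on other cpus. Whether that actually
matches how the hardware delivers the CMCIs is exactly the open question.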

Thanks,

	tglx
