Date:	Wed, 19 Nov 2014 18:29:58 +0000
From:	"Luck, Tony" <tony.luck@...el.com>
To:	Andy Lutomirski <luto@...capital.net>,
	Borislav Petkov <bp@...en8.de>,
	"x86@...nel.org" <x86@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Peter Zijlstra" <peterz@...radead.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: RE: [PATCH v3 0/3] Handle IST interrupts from userspace on the
 normal stack

> NB: Tony has seen odd behavior when stress-testing injected
> machine checks with this series applied.  I suspect that
> it's a bug in something else, possibly his BIOS.  Bugs in
> this series shouldn't be ruled out, though.

v3 did 3.5x better than earlier versions ... it survived overnight but died
just now at 91724 injection/consumption/recovery cycles. Different symptom
this time: instead of losing some cpus, there was a fatal machine check
(the PCC=1 and OVER=1 bits were set in the machine check bank). This might
be from a known issue.
Not sure whether the improvement came from the code changes or from a change
in system configuration: I pulled out all the memory except what is attached
to memory controller 0 on node 0, and our BIOS team had told me they'd seen
some instability in the injection code on fully populated systems.
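
For anyone decoding that by hand, here is a minimal sketch of how those
bits break out of an IA32_MCi_STATUS value. The masks match the
MCI_STATUS_* definitions in arch/x86/include/asm/mce.h; the helper itself
is just an illustration, not kernel code:

	#include <stdint.h>
	#include <stdio.h>

	/* Bit layout of IA32_MCi_STATUS (as in arch/x86/include/asm/mce.h) */
	#define MCI_STATUS_VAL  (1ULL << 63)  /* bank holds valid data */
	#define MCI_STATUS_OVER (1ULL << 62)  /* error overwrote an earlier one */
	#define MCI_STATUS_UC   (1ULL << 61)  /* error was not corrected */
	#define MCI_STATUS_PCC  (1ULL << 57)  /* processor context corrupt */

	/* Illustrative helper: say why a logged status counts as fatal. */
	static void explain_fatal(uint64_t status)
	{
		if (!(status & MCI_STATUS_VAL))
			return;	/* nothing logged in this bank */
		if (status & MCI_STATUS_PCC)
			printf("PCC=1: processor context corrupt, not recoverable\n");
		if (status & MCI_STATUS_OVER)
			printf("OVER=1: a second error hit before the first was handled\n");
	}

As I understand the severity grading, either of those bits on an
uncorrected error is enough to push it to panic severity, which matches
the fatal machine check seen here.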

I did instrument the synchronization in mce_start(). I was a bit worried
that with ever-increasing cpu counts the 100ns delay between successive
atomic ops on mce_callin might not be enough, but it seems we are not in
trouble yet. The slowest synchronization recorded took 1.8M TSC cycles and
the mean is 500K cycles; even the worst case is well under a millisecond
at these clock rates. So my gut feeling that the one second timeout was
very conservative is correct.
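
For reference, the pattern I was timing looks roughly like this. It is a
simplified sketch of the mce_start() rendezvous from
arch/x86/kernel/cpu/mcheck/mce.c, not the actual code, and the rdtsc()
instrumentation is mine:

	static atomic_t mce_callin;

	static int mce_start(int *no_way_out)
	{
		u64 start, cycles;	/* instrumentation added for the test */
		int order = atomic_add_return(1, &mce_callin);
		u64 timeout = (u64)mca_cfg.monarch_timeout * 1000;	/* usec -> nsec */

		start = rdtsc();
		while (atomic_read(&mce_callin) != num_online_cpus()) {
			if (mce_timed_out(&timeout))	/* default: one second */
				return -1;		/* some cpu never showed up */
			ndelay(SPINUNIT);		/* SPINUNIT == 100 ns */
		}
		cycles = rdtsc() - start;	/* slowest observed: ~1.8M cycles */
		return order;	/* first cpu to check in becomes the monarch */
	}

Every cpu that sees the machine check bumps mce_callin, then spins in
100ns steps until everyone has checked in; at a mean of 500K cycles the
one second budget leaves several orders of magnitude of headroom.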

-Tony
