Date:	Wed, 7 May 2014 18:27:46 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Don Zickus <dzickus@...hat.com>
Cc:	x86@...nel.org, Peter Zijlstra <peterz@...radead.org>,
	ak@...ux.intel.com, gong.chen@...ux.intel.com,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH 1/5] x86, nmi:  Add new nmi type 'external'


* Don Zickus <dzickus@...hat.com> wrote:

> On Wed, May 07, 2014 at 05:38:54PM +0200, Ingo Molnar wrote:
> > 
> > * Don Zickus <dzickus@...hat.com> wrote:
> > 
> > > I noticed, when debugging a perf problem on a machine with GHES
> > > enabled, that perf seemed slow.  I then realized that the GHES NMI
> > > routine was taking a global lock all the time to inspect the hardware.
> > > This contended with all the local perf counters, which did not need a
> > > lock, so each cpu was accidentally synchronizing with itself when
> > > using perf.
> > > 
> > > This is because of the way the NMI handler works.  It executes all the
> > > handlers registered to a particular subtype (to deal with NMI sharing).
> > > As a result, the GHES handler was executed on every PMI.
> > > 
> > > Fix this by creating a new NMI type called NMI_EXT, which is used by
> > > handlers that need to probe external hardware and require a global lock
> > > to do so.
> > > 
> > > Now the main NMI handler can check the internal NMI handlers first and
> > > then the external ones if nothing is found.
> > > 
> > > This makes perf a little faster again on those machines with GHES enabled.
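
(Just to make sure I'm reading the new dispatch order right, a rough
sketch - NMI_LOCAL/NMI_EXT and run_handlers() below are my own
shorthand, not necessarily what the patch actually uses:)

	enum nmi_type { NMI_LOCAL, NMI_EXT };

	static int nmi_dispatch(struct pt_regs *regs)
	{
		/* Lock-free, per-cpu handlers (perf PMI etc.) run first. */
		int handled = run_handlers(NMI_LOCAL, regs);

		if (handled)
			return handled;

		/*
		 * Only when no internal handler claims the NMI do we fall
		 * back to handlers that take a global lock to probe
		 * external hardware (GHES, SERR, IO).
		 */
		return run_handlers(NMI_EXT, regs);
	}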
> > 
> > So what happens if GHES asserts an NMI at the same time a PMI 
> > triggers?
> > 
> > If the perf PMI executes and indicates that it has handled something, 
> > we don't execute the GHES handler, right? Will the GHES re-trigger the 
> > NMI after we return?
> 
> In my head, I had thought they would be queued up and things would 
> work out fine. [...]

x86 NMIs are generally edge triggered.

> [...]  But I guess in theory, if a PMI NMI comes in and, before the 
> cpu can accept it, a GHES NMI comes in, then suffice it to say it 
> may get dropped.  That would not be good.  Though the race window 
> would be very small.
> 
> I don't have a good idea how to handle that.

Well, are GHES NMIs reasserted if they are not handled? I don't know, 
but there's a definite answer to that hardware-behavior question.

> On the flip side, we have exactly the same problem today with the 
> other common external NMIs (SERR, IO).  If a PCI SERR comes in at 
> the same time as a PMI, then it gets dropped.  Worse, it doesn't get 
> re-enabled and it blocks future SERRs (I just found this out two 
> weeks ago because of a dirty perf status register on boot).
> 
> Again, I don't have a solution for juggling PMI performance and 
> reliable delivery.  We could do away with the spinlocks and go back 
> to single-cpu delivery (like it used to be), and then devise a 
> mechanism to switch delivery to another cpu upon hotplug.
> 
> Thoughts?

I'd say we should use a delayed timer that makes sure all possible 
handlers are polled after an NMI is triggered, but never at a high 
rate.

Then simply return early the moment an NMI handler indicates that 
it handled an event - and call high-performance handlers like the 
perf handler first.
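
Roughly something like this (everything below is illustrative
shorthand, not a concrete patch: run_handlers() and
run_all_handlers_polled() don't exist under those names, and arming
deferred work straight from NMI context is hand-waved here - in
practice it would need an NMI-safe mechanism such as irq_work):

	static void nmi_poll_all(struct work_struct *work)
	{
		/*
		 * Slow path: poll every registered handler, global locks
		 * and all, so an NMI that raced with another one and got
		 * dropped is still noticed eventually.
		 */
		run_all_handlers_polled();
	}

	static DECLARE_DELAYED_WORK(nmi_poll_work, nmi_poll_all);

	/*
	 * Called at the end of the NMI dispatch path, after the
	 * high-performance handlers have had their early-return chance.
	 */
	static void nmi_arm_catchall_poll(void)
	{
		/* Never at a high rate: at most one poll per second. */
		if (!delayed_work_pending(&nmi_poll_work))
			schedule_delayed_work(&nmi_poll_work, HZ);
	}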

The proper channel for hardware errors is the #MC entry anyway, so 
this is mostly about legacy and weird hardware.

Thanks,

	Ingo