Message-ID: <BANLkTinLvotvcqGU6-OHe2Bj87r-YB_xzA@mail.gmail.com>
Date:	Mon, 13 Jun 2011 09:43:33 -0700
From:	Tony Luck <tony.luck@...el.com>
To:	Borislav Petkov <bp@...64.org>
Cc:	Avi Kivity <avi@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Huang, Ying" <ying.huang@...el.com>,
	Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
Subject: Re: [PATCH 08/10] NOTIFIER: Take over TIF_MCE_NOTIFY and implement
 task return notifier

On Mon, Jun 13, 2011 at 5:40 AM, Borislav Petkov <bp@...64.org> wrote:
> Well, in the ActionRequired case, the error is obviously reported
> through a #MC exception, meaning that the core definitely generates
> the MCE before we've made a context switch (CR3 change etc.), so in
> that case 'current' will point to the task at fault.
>
> The problem is finding which 'current' it is, from all the tasks running
> on all cores when the #MC is raised. Tony, can you tell from the hw
> which core actually caused the MCE? Is it the monarch, so to speak?

We can tell which cpu hit the problem by looking at the MCG_STATUS register.
All the innocent bystanders who were dragged into this machine check will
have the RIPV bit set and the EIPV bit clear (SDM Vol 3A, Table 15-20 in
section 15.3.9.2).  With my patch to re-order processing this will usually
be the monarch (though it might not be if more than one cpu has RIPV==0).
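
Something like this sketch (not the actual patch, but the MSR and bit
names are the ones from <asm/mce.h>):

#include <asm/mce.h>
#include <asm/msr.h>

/*
 * Classify a cpu inside the #MC handler.  Bystanders dragged into
 * the broadcast machine check have RIPV set (safe to return) and
 * EIPV clear (the saved RIP has nothing to do with the error), per
 * SDM Vol 3A, Table 15-20.
 */
static int mce_is_bystander(void)
{
	u64 mcg;

	rdmsrl(MSR_IA32_MCG_STATUS, mcg);

	return (mcg & MCG_STATUS_RIPV) && !(mcg & MCG_STATUS_EIPV);
}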

> I'm thinking that in cases where we have a page shared by multiple
> processes, we still would want to run a 'main' user return notifier on
> one core which does the rmap lookup _but_, _also_ very importantly, the
> other cores still hold off from executing userspace until that main
> notifier has finished finding out how big the fallout is, i.e. how
> many other processes would run into the same page.
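
For reference, the notifier machinery being discussed is the
user-return-notifier API in include/linux/user-return-notifier.h.
A minimal sketch of hooking it from the #MC path, with all the mce_*
names purely illustrative:

#include <linux/user-return-notifier.h>
#include <linux/percpu.h>

static void mce_user_return(struct user_return_notifier *urn)
{
	/* runs on this cpu just before it re-enters userspace */
}

static DEFINE_PER_CPU(struct user_return_notifier, mce_urn) = {
	.on_user_return = mce_user_return,
};

/* called from the #MC handler on each affected cpu */
static void mce_arm_user_return(void)
{
	user_return_notifier_register(&__get_cpu_var(mce_urn));
}

Registration is per-cpu, so each core that saw the #MC would have to
arm its own notifier before heading back toward userspace.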

I don't think that we have any hope of fixing the "multiple processes
about to hit the same page" problem. We can't track down all the users
from the MC handler (because the data structures may be in the process
of being changed by some threads that were in the kernel at the time of
the machine check).  Our only hope of solving this would be to let all
those kernel-context threads return from the MC handler, with a self-IRQ
to grab them back into our clutches when they get out of any critical
sections.
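
One shape that self-IRQ could take is the irq_work facility, which is
safe to queue from exception context and runs its callback from a
self-IPI once the interrupted context has interrupts enabled again.
A sketch only, with illustrative names:

#include <linux/irq_work.h>
#include <linux/percpu.h>

static void mce_deferred_cb(struct irq_work *work)
{
	/*
	 * Runs after the #MC handler has returned and this cpu has
	 * interrupts enabled again: a much safer point to continue
	 * recovery than #MC context itself.
	 */
}

static DEFINE_PER_CPU(struct irq_work, mce_iw);

/* boot-time setup, once per cpu */
static void mce_iw_init(int cpu)
{
	init_irq_work(&per_cpu(mce_iw, cpu), mce_deferred_cb);
}

/* from the #MC handler: punt the rest of the recovery work */
static void mce_defer_recovery(void)
{
	irq_work_queue(&__get_cpu_var(mce_iw));
}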

But that sounds like something to defer to "phase N+1" after we have
solved all the easier cases.

-Tony