Date:	Fri, 22 May 2015 16:17:13 +0200
From:	Radim Krčmář <rkrcmar@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org, bsd@...hat.com
Subject: Re: [PATCH 08/12] KVM: x86: save/load state on SMM switch

2015-05-21 23:21+0200, Paolo Bonzini:
> On 21/05/2015 19:00, Radim Krčmář wrote:
>>   Potentially, an NMI could be latched (while in SMM or upon exit) and
>>   serviced upon exit [...]
>> 
>> This "Potentially" could be in the sense that the whole 3rd paragraph is
>> only applicable to some ancient SMM design :)
> 
> It could also be in the sense that you cannot exclude an NMI coming at
> exactly the wrong time.

Yes, but it is hard to figure out how big that wrong-time window is ...

Taken to the extreme, the paragraph says that we must inject an NMI that
arrived while in SMM after RSM, regardless of NMI blocking beforehand.
(Which is not how real hardware works.)
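
To make the contrast concrete, a minimal sketch of the two readings,
using a made-up model (struct smm_cpu and the helpers below are purely
illustrative, not KVM code, and I haven't verified any of this on
hardware):

  #include <stdbool.h>

  /* Hypothetical, simplified model of NMI masking across SMM entry/exit. */
  struct smm_cpu {
          bool nmi_blocked;          /* NMIs currently masked */
          bool nmi_blocked_pre_smi;  /* masking state saved when the SMI hit */
          bool nmi_latched;          /* an NMI arrived while NMIs were masked */
  };

  static void enter_smm(struct smm_cpu *c)
  {
          c->nmi_blocked_pre_smi = c->nmi_blocked;
          c->nmi_blocked = true;     /* NMIs start out masked in SMM */
  }

  /* 1st-paragraph rule (quoted below): RSM restores the masking state
   * from before the SMI, so a latched NMI is injected only if NMIs
   * end up unmasked. */
  static void rsm_restore(struct smm_cpu *c)
  {
          c->nmi_blocked = c->nmi_blocked_pre_smi;
          if (!c->nmi_blocked && c->nmi_latched) {
                  c->nmi_latched = false;
                  /* deliver the NMI here */
          }
  }

  /* 3rd paragraph taken to the extreme: a latched NMI is injected after
   * RSM regardless of what was blocked before the SMI. */
  static void rsm_extreme(struct smm_cpu *c)
  {
          if (c->nmi_latched) {
                  c->nmi_latched = false;
                  /* deliver the NMI here */
          }
          c->nmi_blocked = c->nmi_blocked_pre_smi;
  }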

> If you want to go full language lawyer, it does mention it whenever
> behavior is specific to a processor family.

True, I don't know of an exception, but that is not proof of the
contrary here :/

>> The 1st paragraph has quite clear sentence:
>> 
>>   If NMIs were blocked before the SMI occurred, they are blocked after
>>   execution of RSM.
>> 
>> so I'd just ignore the 3rd paragraph ...

It's suspicious in other ways ... I'll focus on another part of the
sentence now

  Potentially, an NMI could be latched (while in SMM or upon exit)
                               ^^^^^^^^^^^^^^^^^^^^^

An NMI can't be latched in SMM mode and delivered after RSM when we
started with NMIs masked.
It was latched in SMM, so we either didn't unmask NMIs or we were
executing an NMI in SMM mode.  The first case is covered by

  If NMIs were blocked before the SMI occurred, they are blocked after
  execution of RSM.

The second case, once we specialize the above, would need to unmask NMIs
with IRET, accept an NMI, and then do RSM before that handler's IRET
(because IRET would immediately inject the latched NMI);
if the CPU unmasks NMIs in that case, I'd slap someone.
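
Walking the second case through the same made-up model from above
(again only a sketch of what I'd expect, not a claim about real
hardware):

  static void second_case(struct smm_cpu *c)
  {
          c->nmi_blocked = true;     /* in an NMI handler when the SMI hits */
          enter_smm(c);              /* nmi_blocked_pre_smi == true */
          c->nmi_blocked = false;    /* SMM handler unmasks NMIs with IRET */
          c->nmi_blocked = true;     /* an NMI is accepted inside SMM */
          c->nmi_latched = true;     /* a second NMI arrives and is latched */
          rsm_restore(c);            /* RSM before that handler's IRET:
                                      * NMIs are blocked again, so the
                                      * latched NMI has to stay pending */
  }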

Btw. I had a good laugh at Intel's response to a similar question:
https://software.intel.com/en-us/forums/topic/305672

>> And the APM 2:10.3.3 Exceptions and Interrupts
| [...]
>> makes me think that we should unmask them unconditionally or that SMM
>> doesn't do anything with NMI masking.
> 
> Actually I hadn't noticed this paragraph.  But I read it the same as the
> Intel manual (i.e. what I implemented): it doesn't say anywhere that RSM
> may cause the processor to *set* the "NMIs masked" flag.
> 
> It makes no sense; as you said it's 1 bit of state!  But it seems that
> it's the architectural behavior. :(

Ok, it's sad and I'm too lazy to actually try it ...
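
If I read you right, in the toy model from above your reading would be
roughly the following (just my paraphrase of this thread, not the
actual patch): RSM can clear NMI masking, but it never sets it.

  static void rsm_never_sets(struct smm_cpu *c)
  {
          /* If NMIs were not blocked before the SMI, RSM unmasks them;
           * if they were, RSM leaves whatever state SMM ended up with. */
          if (!c->nmi_blocked_pre_smi)
                  c->nmi_blocked = false;
  }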

>> If we can choose, less NMI nesting seems like a good idea.
> 
> It would---I'm just preempting future patches from Nadav. :)

Me too :D

>                                                               That said,
> even if OVMF does do IRETs in SMM (in 64-bit mode it fills in page
> tables lazily for memory above 4GB), we do not care about asynchronous
> SMIs such as those for power management.  So we should never enter SMM
> with NMIs masked, to begin with.

Yeah, it's a stupid corner case, the place where most time and sanity
is lost.