Message-ID: <555E4C4E.1010603@redhat.com>
Date: Thu, 21 May 2015 23:21:18 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Radim Krčmář <rkrcmar@...hat.com>
CC: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, bsd@...hat.com
Subject: Re: [PATCH 08/12] KVM: x86: save/load state on SMM switch
On 21/05/2015 19:00, Radim Krčmář wrote:
> Potentially, an NMI could be latched (while in SMM or upon exit) and
> serviced upon exit [...]
>
> This "Potentially" could be in the sense that the whole 3rd paragraph is
> only applicable to some ancient SMM design :)
It could also be in the sense that you cannot exclude an NMI coming at
exactly the wrong time.
If you want to go full language lawyer, the manual does mention it
whenever a behavior is specific to a processor family.
> The 1st paragraph has quite a clear sentence:
>
> If NMIs were blocked before the SMI occurred, they are blocked after
> execution of RSM.
>
> so I'd just ignore the 3rd paragraph ...
>
> And the APM 2:10.3.3 Exceptions and Interrupts
> NMI—If an NMI occurs while the processor is in SMM, it is latched by
> the processor, but the NMI handler is not invoked until the processor
> leaves SMM with the execution of an RSM instruction. A pending NMI
> causes the handler to be invoked immediately after the RSM completes
> and before the first instruction in the interrupted program is
> executed.
>
> An SMM handler can unmask NMI interrupts by simply executing an IRET.
> Upon completion of the IRET instruction, the processor recognizes the
> pending NMI, and transfers control to the NMI handler. Once an NMI is
> recognized within SMM using this technique, subsequent NMIs are
> recognized until SMM is exited. Later SMIs cause NMIs to be masked,
> until the SMM handler unmasks them.
>
> makes me think that we should unmask them unconditionally or that SMM
> doesn't do anything with NMI masking.
Actually I hadn't noticed this paragraph. But I read it the same way as
the Intel manual (i.e. what I implemented): neither manual says anywhere
that RSM may cause the processor to *set* the "NMIs masked" flag.
It makes no sense; as you said it's 1 bit of state! But it seems that
it's the architectural behavior. :(
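To make that reading concrete, here is a minimal standalone sketch (not
the code from this patch; the struct and the sketch_smi/sketch_rsm names
are invented for illustration): SMI delivery only records whether NMIs
were blocked at that point, and RSM either clears the "NMIs masked" bit
or leaves it untouched, but never sets it, so a handler that unmasked
NMIs with an IRET leaves SMM with NMIs unmasked.

#include <stdbool.h>
#include <stdio.h>

/* A made-up one-bit model of the behavior discussed above. */
struct smm_vcpu_state {
	bool nmi_masked;        /* the single architectural "NMIs blocked" bit */
	bool nmi_masked_on_smi; /* snapshot taken when the SMI was delivered   */
};

/* SMI delivery: remember whether NMIs were blocked; leave the bit alone. */
static void sketch_smi(struct smm_vcpu_state *s)
{
	s->nmi_masked_on_smi = s->nmi_masked;
}

/* RSM: may clear the bit (NMIs were unblocked before the SMI), never sets it. */
static void sketch_rsm(struct smm_vcpu_state *s)
{
	if (!s->nmi_masked_on_smi)
		s->nmi_masked = false;
}

int main(void)
{
	/* The SMI interrupts an NMI handler, so NMIs are blocked on entry. */
	struct smm_vcpu_state s = { .nmi_masked = true };

	sketch_smi(&s);
	s.nmi_masked = false;   /* the SMM handler executes an IRET */
	sketch_rsm(&s);

	/* Prints 0: RSM left the bit cleared instead of setting it again. */
	printf("NMIs masked after RSM: %d\n", s.nmi_masked);
	return 0;
}

Under this reading the IRET inside SMM wins over the pre-SMI masked
state; that is exactly the "1 bit of state" oddity mentioned above.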
> If we can choose, less NMI nesting seems like a good idea.
It would---I'm just preempting future patches from Nadav. :) That said,
even if OVMF does do IRETs in SMM (in 64-bit mode it fills in page
tables lazily for memory above 4GB), we do not care about asynchronous
SMIs such as those for power management. So we should never enter SMM
with NMIs masked, to begin with.
Paolo