Message-ID: <20150506171428.GB17718@potion.brq.redhat.com>
Date: Wed, 6 May 2015 19:14:32 +0200
From: Radim Krčmář <rkrcmar@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, bsd@...hat.com,
guangrong.xiao@...ux.intel.com,
Yang Zhang <yang.z.zhang@...el.com>, wanpeng.li@...ux.intel.com
Subject: Re: [RFC PATCH 00/13] KVM: x86: SMM support
2015-05-06 13:18+0200, Paolo Bonzini:
> On 05/05/2015 20:40, Radim Krčmář wrote:
> > - Whole SMRAM is writeable. Spec says that parts of state should be
> > read-only. (This seems hard to fix without trapping all writes.)
>
> Read-only here just means that you shouldn't touch it. It says "Some
> register images are read-only, and must not be modified (modifying these
> registers will result in unpredictable behavior)".
I haven't seen the note that they mustn't be modified, sorry.
> But actually the behavior is very predictable, and can be very fun. You
> can do stuff such as interrupting a VM86 task with an SMI, and prepare
> an SMM handler that returns to VM86 with CPL=0 (by setting SS.DPL=0 in
> the SS access rights field). That's very illegal compared to big real
> mode. :)
>
> Or you can fake a processor reset straight after RSM, which includes
> setting the right segment base, limit and access rights (again you need
> to set SS.DPL=0 to affect the CPL).
>
> Worst case, you get a failed VM entry (e.g. if you set up an invalid
> combination of segment limit and segment G flag). If you care, disable
> unrestricted_guest. :)
Nice, thanks.
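(Aside: a minimal userspace analogue of that SS.DPL trick, poking the
same state through KVM_GET/SET_{REGS,SREGS} instead of editing the
SMRAM state save area -- vcpu_fd here stands for an already-created
vCPU fd:

#include <linux/kvm.h>
#include <sys/ioctl.h>

#define X86_EFLAGS_VM (1UL << 17)       /* virtual-8086 mode */

/* Enter VM86 with CPL=0.  KVM derives CPL from SS.DPL, so forcing
 * SS.DPL=0 yields a "VM86 task" running with ring-0 privileges --
 * the illegal-looking combination described above. */
static int enter_vm86_with_cpl0(int vcpu_fd)
{
        struct kvm_regs regs;
        struct kvm_sregs sregs;

        if (ioctl(vcpu_fd, KVM_GET_REGS, &regs) < 0 ||
            ioctl(vcpu_fd, KVM_GET_SREGS, &sregs) < 0)
                return -1;

        regs.rflags |= X86_EFLAGS_VM;   /* run as a VM86 task ... */
        sregs.ss.dpl = 0;               /* ... but with CPL=0 */

        if (ioctl(vcpu_fd, KVM_SET_REGS, &regs) < 0 ||
            ioctl(vcpu_fd, KVM_SET_SREGS, &sregs) < 0)
                return -1;
        return 0;
}
)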
> > - I/O restarting is not enabled. (APM 2:10.2.4 SMM-Revision Identifier
> > says that AMD64 always sets this bit.)
>
> Yes, unfortunately if I do enable it SeaBIOS breaks. So it's left for
> later.
>
> I/O restarting is meant for stuff like emulating the i8042 on top of a
> USB keyboard. We luckily don't care (do not get strange ideas about
> reducing the QEMU attack surface).
Ok.  (SMM handlers doing sanity checks on their environment are probably
the biggest obstacle to enabling it.)
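(For reference, the bit lives in the SMM revision identifier in the
state save area; decoding it looks roughly like this -- field layout as
the Intel SDM / AMD APM describe it, helper name made up:

#include <stdint.h>
#include <stdio.h>

/* SMM revision identifier: low 16 bits are the revision level,
 * bit 16 advertises I/O instruction restart, bit 17 advertises
 * SMBASE relocation. */
#define SMM_REV_IO_RESTART      (1u << 16)
#define SMM_REV_SMBASE_RELOC    (1u << 17)

/* smrev would be read from the state save area by the SMM handler. */
static void report_smm_features(uint32_t smrev)
{
        printf("revision level: 0x%04x\n", smrev & 0xffff);
        printf("I/O restart:    %s\n",
               (smrev & SMM_REV_IO_RESTART) ? "yes" : "no");
        printf("SMBASE reloc:   %s\n",
               (smrev & SMM_REV_SMBASE_RELOC) ? "yes" : "no");
}
)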
> > - SMM and userspace.
> > We can query whether SMM is enabled at two separate places (the flag
> > from KVM_RUN and in KVM_GET_VCPU_EVENTS) and toggle it via
> > KVM_SET_VCPU_EVENTS.
> >
> > It's not an event, so I wouldn't include it in EVENTS API ...
>
> Well, neither is nmi.masked or interrupt.shadow. In the end, smi.smm is
> just "smi.masked" (except that it also doubles as "is RSM allowed/is
> SMRAM accessible").
Yeah, that dual role is what bugs me ... SMIs can be masked for
reasons other than being in SMM, so the connection is not obvious.
(But all such cases I know of are handled differently in KVM.)
Another issue: when emulating the SMM switch in userspace, the EVENTS
ioctl isn't where I would expect a toggle for KVM to be.
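For illustration, the toggle under discussion would look roughly like
this from userspace (assuming the smi.smm field and the
KVM_VCPUEVENT_VALID_SMM flag this series proposes; vcpu_fd is a
placeholder):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_in_smm(int vcpu_fd, int in_smm)
{
        struct kvm_vcpu_events events;

        if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
                return -1;

        events.smi.smm = !!in_smm;       /* doubles as "SMIs masked" */
        events.flags |= KVM_VCPUEVENT_VALID_SMM; /* make SET look at smi.* */

        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}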
> > Letting the flag in KVM_RUN also toggle SMM would be easiest.
>
> I'm worried about breaking userspace with that. I would probably have
> to enable the SMM capability manually.
>
> By comparison, the current implementation is entirely transparent as
> long as the guest only generates SMIs through the APIC: all QEMU changes
> are needed to support SMRAM and generation of SMIs through port 0xB2,
> but the feature otherwise has zero impact on userspace.
They should be equally transparent. Userspace needs to preserve all
reserved bits, and hopefully does. (It's the same with SET_EVENTS.)
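(Concretely, the KVM_RUN side amounts to this, assuming the
KVM_RUN_X86_SMM flag bit from this series; userspace only tests the
bits it knows about and preserves the rest:

#include <linux/kvm.h>

/* "run" is the vCPU's mmap()ed struct kvm_run. */
static int vcpu_in_smm(const struct kvm_run *run)
{
        return !!(run->flags & KVM_RUN_X86_SMM);
}
)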
> But the main point in favor of "smi.smm" IMO is that it doubles as
> "smi.masked".
True. 'smi.masked_as_we_are_in_smm' :)
> > Otherwise, wouldn't GET/SET_ONE_REG be a better match for it?
>
> Perhaps, but then smi.pending would still be a better match for
> KVM_GET_VCPU_EVENTS than for ONE_REG. (And again, so would
> "smi.masked"---it just happens that "masked SMIs == CPU in SMM").
smi.pending makes sense in events, so moving smi.smm to ONE_REG would
split the SMM state across two APIs ...
Your original solution is a good one.  (The others aren't any better.)
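(Purely for comparison, the ONE_REG variant would have looked something
like this; the register id below is invented, x86 defines no such id:

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>

/* NOT a real id -- made up only to show the shape of the API. */
#define KVM_REG_X86_SMM_FAKE    0x2030000000000000ULL

static int get_smm_via_one_reg(int vcpu_fd, uint64_t *val)
{
        struct kvm_one_reg reg = {
                .id   = KVM_REG_X86_SMM_FAKE,
                .addr = (uintptr_t)val,
        };
        return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}
)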