Message-ID: <20150521162036.GA31183@potion.brq.redhat.com>
Date:	Thu, 21 May 2015 18:20:36 +0200
From:	Radim Krčmář <rkrcmar@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org, bsd@...hat.com
Subject: Re: [PATCH 08/12] KVM: x86: save/load state on SMM switch

2015-05-08 13:20+0200, Paolo Bonzini:
> The big ugly one.  This patch adds support for switching in and out of
> system management mode, respectively upon receiving KVM_REQ_SMI and upon
> executing a RSM instruction.  Both 32- and 64-bit formats are supported
> for the SMM state save area.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> 	RFC->v1: shift access rights left by 8 for 32-bit format
> 		 move tracepoint to kvm_set_hflags
> 		 fix NMI handling
> ---
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> @@ -2262,12 +2262,258 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt)
> +static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
> +{
> +	struct desc_struct desc;
> +	int offset;
> +	u16 selector;
> +
> +	selector = get_smstate(u32, smbase, 0x7fa8 + n * 4);

(Should be u16; the SDM says the two most significant bytes are reserved anyway.)
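
I.e., something like (untested sketch against this patch's helpers):

	selector = get_smstate(u16, smbase, 0x7fa8 + n * 4);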

> +	if (n < 3)
> +		offset = 0x7f84 + n * 12;
> +	else
> +		offset = 0x7f2c + (n - 3) * 12;

These numbers made me look up where the hell that is defined, and the
easiest reference seemed to be http://www.sandpile.org/x86/smm.htm,
which has several layouts ... I hopefully checked the intersection of
the various Intel and AMD ones.
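
For reference, these are the offsets the patch computes, assuming n
follows KVM's VCPU_SREG_* order (ES=0, CS, SS, DS, FS, GS=5); a quick
standalone sanity check:

	/* Prints the per-segment descriptor and selector offsets, all
	 * relative to smbase + 0x8000, exactly as the patch computes
	 * them. */
	#include <stdio.h>

	int main(void)
	{
		static const char *seg[] = { "ES", "CS", "SS", "DS", "FS", "GS" };
		int n;

		for (n = 0; n < 6; n++) {
			int offset = n < 3 ? 0x7f84 + n * 12
					   : 0x7f2c + (n - 3) * 12;
			printf("%s: desc 0x%04x, selector 0x%04x\n",
			       seg[n], offset, 0x7fa8 + n * 4);
		}
		return 0;
	}

which gives ES 0x7f84, CS 0x7f90, SS 0x7f9c, DS 0x7f2c, FS 0x7f38 and
GS 0x7f44.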

> +	set_desc_base(&desc,      get_smstate(u32, smbase, offset + 8));
> +	set_desc_limit(&desc,     get_smstate(u32, smbase, offset + 4));
> +	rsm_set_desc_flags(&desc, get_smstate(u32, smbase, offset));

(There wasn't a layout where this would be right, i.e. one that would
 let us save the shifting of those flags in 64-bit mode.  Intel P6 was
 close, but it had only 2 bytes for access rights, which means they
 weren't shifted.)

> +static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, u64 smbase)
> +{
> +	cr0 =                      get_smstate(u32, smbase, 0x7ffc);

(The state save map lives at smbase + 0xfe00 .. smbase + 0xffff, so an
 offset like 0x7ffc really means smbase + 0xfffc.  I wonder why they
 made 'smbase + 0x8000' the reference point in the SDM, when
 'smbase + 0xfe00' or plain 'smbase' would work as well.)

> +static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
> +{
> +	struct desc_struct desc;
> +	u16 selector;
> +	selector =                  get_smstate(u32, smbase, 0x7e90);
> +	rsm_set_desc_flags(&desc,   get_smstate(u32, smbase, 0x7e92) << 8);

(Both reads should be u16.  Luckily, extra data gets ignored.)
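
I.e. (untested sketch):

	selector =                  get_smstate(u16, smbase, 0x7e90);
	rsm_set_desc_flags(&desc,   get_smstate(u16, smbase, 0x7e92) << 8);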

>  static int em_rsm(struct x86_emulate_ctxt *ctxt)
>  {
> +	if ((ctxt->emul_flags & X86EMUL_SMM_INSIDE_NMI_MASK) == 0)
> +		ctxt->ops->set_nmi_mask(ctxt, false);

NMI is always fun ... let's see two cases:
1. NMI -> SMI -> RSM -> NMI
NMI is not injected;  ok.

2. NMI -> SMI -> IRET -> RSM -> NMI
NMI is injected;  I think it shouldn't be ... have you based this
behavior on the 3rd paragraph of SDM 34.8 NMI HANDLING WHILE IN SMM
("A special case [...]")?

Why I think we should restore the NMI mask on RSM (sketch below):
- It's consistent with SMI -> IRET -> NMI -> RSM -> NMI (where we,
  I think correctly, unmask NMIs) and with the idea that SMM tries to
  be transparent (though maybe they didn't care about broken SMI
  handlers).
- APM 2:15.30.3 SMM_CTL MSR (C001_0116h)
  • ENTER—Bit 1. Enter SMM: map the SMRAM memory areas, record whether
    NMI was currently blocked and block further NMI and SMI interrupts.
  • EXIT—Bit 3. Exit SMM: unmap the SMRAM memory areas, restore the
    previous masking status of NMI and unconditionally reenable SMI.
  
  The MSR should mimic real SMM signals and does restore the NMI mask.
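
Something along these lines in em_rsm() is what I have in mind (an
untested sketch reusing this patch's flag and helpers):

	/* Restore the NMI mask to its state at SMI entry, instead of
	 * only ever unmasking: this also re-masks NMIs when an IRET
	 * inside SMM unmasked them (case 2 above). */
	ctxt->ops->set_nmi_mask(ctxt,
			!!(ctxt->emul_flags & X86EMUL_SMM_INSIDE_NMI_MASK));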