Date:	Tue, 19 May 2015 14:25:11 +0000
From:	"Zhang, Yang Z" <yang.z.zhang@...el.com>
To:	Paolo Bonzini <pbonzini@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>
CC:	"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
	"bsd@...hat.com" <bsd@...hat.com>,
	"guangrong.xiao@...ux.intel.com" <guangrong.xiao@...ux.intel.com>,
	"wanpeng.li@...ux.intel.com" <wanpeng.li@...ux.intel.com>
Subject: RE: [RFC PATCH 00/13] KVM: x86: SMM support

Paolo Bonzini wrote on 2015-04-30:
> This patch series introduces system management mode support.

Just curious, what is the motivation for adding vSMM support? Is there any use case inside the guest that requires SMM? Thanks.

> There is still some work to do, namely: test without unrestricted
> guest support, test on AMD, disable the capability if !unrestricted
> guest and !emulate invalid guest state(*), test with a QEMU that
> understands KVM_MEM_X86_SMRAM, and actually post QEMU patches that let you use this.
> 
> 	(*) newer chipsets moved away from legacy SMRAM at 0xa0000,
> 	    thus support for real mode CS base above 1M is necessary
> 
> Because legacy SMRAM is a mess, I have tried these patches with Q35's
> high SMRAM (at 0xfeda0000).  This means that right now this isn't the
> easiest thing to test; you need QEMU patches that add support for high
> SMRAM, and SeaBIOS patches to use high SMRAM.  Until QEMU support for
> KVM_MEM_X86_SMRAM is in place, I'm also keeping SMRAM open in SeaBIOS.
> 
> That said, even this clumsy and incomplete userspace configuration is
> enough to test all patches except 11 and 12.
> 
> The series is structured as follows.
> 
> Patch 1 is an unrelated bugfix (I think).  Patches 2 to 6 extend some
> infrastructure functions.  Patches 1 to 4 could be committed right now.
> 
> Patches 7 to 9 implement basic support for SMM in the KVM API and
> teach KVM about doing the world switch on SMI and RSM.
> 
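[As a rough illustration of what the SMI/RSM world switch amounts to, here is a
small stand-alone C model. The struct and function names below are invented for
this sketch and are not taken from the patches; a real SMM entry saves far more
state into the SMRAM state-save area than shown here.]

#include <stdbool.h>
#include <stdint.h>

struct toy_vcpu {
        uint64_t rip;               /* a tiny slice of guest state            */
        uint64_t smram_saved_rip;   /* stand-in for the SMRAM state-save area */
        bool in_smm;                /* the "hidden" SMM flag                  */
};

/* SMI: save the interrupted context into SMRAM, set the SMM flag,
 * and start executing the SMI handler. */
static void toy_enter_smm(struct toy_vcpu *v, uint64_t smi_handler_rip)
{
        v->smram_saved_rip = v->rip;
        v->in_smm = true;
        v->rip = smi_handler_rip;
}

/* RSM: restore the saved context and clear the SMM flag. */
static void toy_leave_smm(struct toy_vcpu *v)
{
        v->rip = v->smram_saved_rip;
        v->in_smm = false;
}

int main(void)
{
        struct toy_vcpu v = { .rip = 0x1000 };

        toy_enter_smm(&v, 0xfeda8000);  /* SMI arrives          */
        toy_leave_smm(&v);              /* handler executes RSM */
        return v.rip == 0x1000 ? 0 : 1;
}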
> Patch 10 touches all places in KVM that read/write guest memory to go
> through an x86-specific function.  The x86-specific function takes a
> VCPU rather than a struct kvm.  This is used in patches 11 and 12 to
> limit access to specially marked SMRAM slots unless the VCPU is in
> system management mode.
> 
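[The gating idea in patches 10-12 can be sketched with a similarly simplified
model: slot lookup becomes vcpu-scoped, and a slot carrying an SMRAM-style flag
is simply invisible unless the vcpu is in SMM. Every name below (TOY_SLOT_SMRAM,
toy_gfn_to_slot, ...) is made up for illustration and is not the actual KVM code.]

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_SLOT_SMRAM  (1u << 0)   /* stand-in for a KVM_MEM_X86_SMRAM-style flag */

struct toy_memslot {
        uint64_t gpa_start, len;
        uint32_t flags;
        uint8_t *host_mem;
};

struct toy_vcpu {
        bool in_smm;
        struct toy_memslot *slots;
        size_t nslots;
};

/* vcpu-scoped lookup: an SMRAM-flagged slot is only visible when the
 * vcpu is in system management mode. */
static struct toy_memslot *toy_gpa_to_slot(struct toy_vcpu *v, uint64_t gpa)
{
        for (size_t i = 0; i < v->nslots; i++) {
                struct toy_memslot *s = &v->slots[i];

                if (gpa < s->gpa_start || gpa >= s->gpa_start + s->len)
                        continue;
                if ((s->flags & TOY_SLOT_SMRAM) && !v->in_smm)
                        return NULL;    /* SMRAM is hidden outside SMM */
                return s;
        }
        return NULL;
}

int main(void)
{
        static uint8_t ram[0x1000], smram[0x1000];
        struct toy_memslot slots[] = {
                { 0x00000000, sizeof(ram),   0,              ram   },
                { 0xfeda0000, sizeof(smram), TOY_SLOT_SMRAM, smram },
        };
        struct toy_vcpu v = { .in_smm = false, .slots = slots, .nslots = 2 };

        printf("SMRAM visible outside SMM? %s\n",
               toy_gpa_to_slot(&v, 0xfeda0000) ? "yes" : "no");
        v.in_smm = true;
        printf("SMRAM visible inside SMM?  %s\n",
               toy_gpa_to_slot(&v, 0xfeda0000) ? "yes" : "no");
        return 0;
}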
> Finally, patch 13 exposes the new capability for userspace to probe.
> 
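[Probing the new capability from userspace should be the usual
KVM_CHECK_EXTENSION dance; the constant name used below, KVM_CAP_X86_SMM, is an
assumption on my part, so check the actual patch for the name it defines.]

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
                perror("open /dev/kvm");
                return 1;
        }
        /* KVM_CAP_X86_SMM is assumed here; older headers may not define it. */
        int smm = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_X86_SMM);
        printf("SMM capability: %s\n", smm > 0 ? "supported" : "not supported");
        return 0;
}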


Best regards,
Yang

