Message-ID: <1547158922.20396.13.camel@intel.com>
Date:   Thu, 10 Jan 2019 22:22:05 +0000
From:   "Huang, Kai" <kai.huang@...el.com>
To:     "Christopherson, Sean J" <sean.j.christopherson@...el.com>,
        "luto@...nel.org" <luto@...nel.org>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "jarkko.sakkinen@...ux.intel.com" <jarkko.sakkinen@...ux.intel.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "josh@...htriplett.org" <josh@...htriplett.org>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "haitao.huang@...ux.intel.com" <haitao.huang@...ux.intel.com>,
        "greg@...ellic.com" <greg@...ellic.com>,
        "x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "linux-sgx@...r.kernel.org" <linux-sgx@...r.kernel.org>,
        "bp@...en8.de" <bp@...en8.de>,
        "jethro@...tanix.com" <jethro@...tanix.com>
Subject: Re: x86/sgx: uapi change proposal

On Thu, 2019-01-10 at 13:34 -0800, Andy Lutomirski wrote:
> > > On Jan 9, 2019, at 8:31 AM, Sean Christopherson <sean.j.christopherson@...el.com> wrote:
> > > 
> > > On Tue, Jan 08, 2019 at 02:54:11PM -0800, Andy Lutomirski wrote:
> > > On Tue, Jan 8, 2019 at 2:09 PM Sean Christopherson
> > > <sean.j.christopherson@...el.com> wrote:
> > > > 
> > > > Cleaner in the sense that it's faster to get basic support up and running
> > > > since there are fewer touchpoints, but there are long term ramifications
> > > > to cramming EPC management in KVM.
> > > > 
> > > > And at this point I'm not stating any absolutes, e.g. how EPC will be
> > > > handled by KVM.  What I'm pushing for is to not eliminate the possibility
> > > > of having the SGX subsystem own all EPC management, e.g. don't tie
> > > > /dev/sgx to a single enclave.
> > > 
> > > I haven't gone and re-read all the relevant SDM bits, so I'll just
> > > ask: what, if anything, are the actual semantics of mapping "raw EPC"
> > > like this?  You can't actually do anything with the mapping from user
> > > mode unless you actually get an enclave created and initialized in it
> > > and have it mapped at the correct linear address, right?  I still
> > > think you have the right idea, but it is a bit unusual.
> > 
> > Correct, the EPC is inaccessible until a range is "mapped" with ECREATE.
> > But I'd argue that it's not unusual, just different.  And really it's not
> > all that different than userspace mmap'ing /dev/sgx/enclave prior to
> > ioctl(ENCLAVE_CREATE).  In that case, userspace can still (attempt to)
> > access the "raw" EPC, i.e. generate a #PF, the kernel/driver just happens
> > to consider any faulting EPC address without an associated enclave as
> > illegal, e.g. signals SIGBUS.
> > 
> > The /dev/sgx/epc case simply has different semantics for moving pages in
> > and out of the EPC, i.e. different fault and eviction semantics.  Yes,
> > this allows the guest kernel to directly access the "raw" EPC, but that's
> > conceptually in line with hardware where privileged software can directly
> > "access" the EPC (or rather, the abort page for all intents and purposes).
> > I.e. it's an argument for requiring certain privileges to open /dev/sgx/epc,
> > but IMO it's not unusual.
> > 
> > Maybe /dev/sgx/epc is a poor name and is causing confusion, e.g.
> > /dev/sgx/virtualmachine might be more appropriate.
> > 
> > > I do think it makes sense to have QEMU delegate the various ENCLS
> > > operations (especially EINIT) to the regular SGX interface, which will
> > > mean that VM guests will have exactly the same access controls applied
> > > as regular user programs, which is probably what we want.
> > 
> > To what end?  Except for EINIT, none of the ENCLS leafs are interesting
> > from a permissions perspective.  Trapping and re-executing ENCLS leafs
> > is painful, e.g. most leafs have multiple virtual addresses that need to
> > be translated.  And routing everything through the regular interface
> > would make SGX even slower than it already is, e.g. every ENCLS would
> > take an additional ~900 cycles just to handle the VM-Exit, and that's
> > not accounting for any additional overhead in the SGX code, e.g. using
> > the regular interface would mean superfluous locks, etc...
> 
> Trapping EINIT is what I have in mind.
> 
> > 
> > Couldn't we require the same privilege/capability for VMs and EINIT
> > tokens?  I.e. /dev/sgx/virtualmachine can only be opened by a user that
> > can also generate tokens.
> 
> Hmm, maybe.  Or we can use Jarkko’s securityfs attribute thingy.
> 
> Concretely, I think there are two things we care about:
> 
> First, if the host enforces some policy as to which enclaves can
> launch, then it should apply the same policy to guests — otherwise KVM
> lets programs do an end run around the policy. So, in the initial
> incarnation of this, QEMU should probably have to open the provision
> attribute fd if it wants its guest to be able to EINIT a provisioning
> enclave.  When someone inevitably adds an EINIT LSM hook, the KVM
> interface should also call it.
> 
> Second, the normal enclave interface won't allow user code to supply
> an EINITTOKEN, so the KVM interface will presumably need to be
> different, unless we're going to emulate EINIT by ignoring the token.
> That seems like a very strange thing to do.
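[Editor's note: the two requirements above could be sketched, speculatively, as a userspace/KVM flow. Every device path, ioctl name, and helper below is a hypothetical placeholder for illustration, not an actual API at the time of this thread.]

```
// Hypothetical sketch -- all names are placeholders, not a real API.

// 1. Host launch policy applies to guests too: QEMU must hold the same
//    provision-attribute fd a native enclave host would need, so it
//    cannot do an end run around the host's policy.
provision_fd = open("/dev/sgx/provision")          // may be denied by policy/LSM
ioctl(vm_fd, KVM_SGX_SET_ATTRIBUTE, provision_fd)  // grant guest PROVISIONKEY

// 2. Guest EINIT is trapped; the token comes from the guest, so the
//    KVM path must accept an EINITTOKEN the normal uapi would reject
//    (rather than emulating EINIT by ignoring the token).
on_vmexit(ENCLS_EINIT):
    call_lsm_einit_hook(sigstruct)     // same policy hook as host enclaves
    einit(sigstruct, einittoken, secs) // token honored, not ignored
```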

Hi Andy,

IMHO, applying policy to enclaves in a VM should be treated differently from applying policy to
enclaves on the host. The SGX SW stack in the host should be able to run inside a VM without any
modification, so, for example, if the host sets a policy that no LE can run (except the host's own
LE), then we are essentially disabling SGX in the VM. In general, KVM SGX is supposed to run all
guest OSes with SGX. And for the provisioning enclave, do you see any reason we need to disallow
running it inside a VM?

Maybe some more general questions: What policy/policies should we have in the host? Should they be
in the core SGX code, or should they belong to the SGX driver's scope? Do we need to figure all of
them out, and how to control them, before we can actually think about upstreaming virtualization
support?

Thanks,
-Kai
