Message-ID: <37306EFA9975BE469F115FDE982C075BC6B3B5E6@ORSMSX108.amr.corp.intel.com>
Date: Wed, 13 Dec 2017 23:18:29 +0000
From: "Christopherson, Sean J" <sean.j.christopherson@...el.com>
To: Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"intel-sgx-kernel-dev@...ts.01.org"
<intel-sgx-kernel-dev@...ts.01.org>,
"platform-driver-x86@...r.kernel.org"
<platform-driver-x86@...r.kernel.org>
Subject: RE: [intel-sgx-kernel-dev] [PATCH v5 06/11] intel_sgx: driver for
Intel Software Guard Extensions

On Wed, Nov 15, 2017 at 10:20:27AM -0800, Sean Christopherson wrote:
> On Tue, 2017-11-14 at 22:28 +0200, Jarkko Sakkinen wrote:
> > On Tue, Nov 14, 2017 at 09:55:06AM -0800, Sean Christopherson wrote:
> > >
> > > What do you mean by bottlenecks? Assuming you're referring to performance
> > > bottlenecks, this statement is flat out false. Moving the launch enclave
> > > into the kernel introduces performance bottlenecks, e.g. as implemented,
> > > a single LE services all EINIT requests and is protected by a mutex.
> > > That is the very definition of a bottleneck.
> > I guess the text does not do a good job describing what I meant. Maybe I
> > should refine it? Your argument about the mutex is correct.
> >
> > The use of "bottleneck" does not specifically refer to performance. I'm
> > worried about splitting the tasks needed to launch an enclave between
> > kernel and user space. It could become difficult to manage when more
> > SGX features are added. That is what I was referring to when I used the
> > word "bottleneck".
> >
> > I suppose you think I should refine the commit message?
> >
> > About the perf bottleneck. Given that all the data is already in
> > sgx_le_ctx, the driver could, for example, have its own LE process for
> > every opened /dev/sgx. Are you also suggesting that this be refined, or
> > could it be postponed?
>
> It's more that I don't understand why the driver doesn't allow userspace to provide
> an EINIT token, and reciprocally, doesn't provide the token back to userspace.
> IMO, the act of generating an EINIT token is orthogonal to deciding whether or
> not to run the enclave. Running code in a kernel-owned enclave is not
> specific to launch control, e.g. paranoid kernels could run other sensitive
> tasks in an enclave.
> Being forced to run an enclave to generate an EINIT token is an unfortunate
> speed bump that exists purely because hardware doesn't provide the option to
> disable launch control entirely.
>
> In other words, accepting a token via the IOCTL doesn't mean the driver has to
> use it, e.g. it can always ignore the token, enforce periodic reverification,
> check that the token was created by the driver, etc... And using the token
> doesn't preclude the driver from re-running its verification checks outside of
> the launch enclave.
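
Concretely, the suggestion quoted above boils down to something like the
sketch below. The struct, field, and ioctl names are hypothetical, purely
for illustration, and don't reflect the actual intel_sgx ABI:

/*
 * Hypothetical sketch only: names, layout, and ioctl numbering are made
 * up.  The point is just that userspace can pass in a previously
 * generated token (or none), and gets back whatever token the driver
 * actually used so it can be reused on a later EINIT.
 */
#include <linux/ioctl.h>
#include <linux/types.h>

#define SGX_EINITTOKEN_SIZE	304	/* EINITTOKEN is 304 bytes per the SDM */

struct sgx_enclave_init_hypothetical {
	__u64	addr;			/* enclave base address */
	__u64	sigstruct;		/* user pointer to SIGSTRUCT */
	__u64	einittoken;		/* user pointer to a token, or 0 */
	__u8	token_out[SGX_EINITTOKEN_SIZE];	/* token the driver used */
};

#define SGX_IOC_ENCLAVE_INIT_HYPOTHETICAL \
	_IOWR('p', 0x03, struct sgx_enclave_init_hypothetical)
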
Resurrecting this thread now that I have a system with launch control
and have been able to measure the performance impact...

Regenerating the EINIT token every time adds roughly 5% overhead to
creating an enclave, versus generating a token once and reusing it in
each EINIT call. This isn't a huge issue since real-world usage models
likely won't be re-launching enclaves at a high rate, but it is
measurable.
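
For reference, the reuse pattern is roughly the sketch below, assuming
the hypothetical token-in/token-out ioctl from the earlier sketch (this
illustrates the pattern, it is not the code those numbers came from):

/*
 * Userspace-side sketch of generate-once-and-reuse, building on the
 * hypothetical struct/ioctl above.  Illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/types.h>

static __u8 cached_token[SGX_EINITTOKEN_SIZE];
static bool have_token;

static int enclave_init_reusing_token(int sgx_fd,
				      struct sgx_enclave_init_hypothetical *parms)
{
	int ret;

	/*
	 * Pass the cached token in if we have one; 0 asks the driver to
	 * generate a fresh one via its launch enclave.
	 */
	parms->einittoken = have_token ? (__u64)(uintptr_t)cached_token : 0;

	ret = ioctl(sgx_fd, SGX_IOC_ENCLAVE_INIT_HYPOTHETICAL, parms);
	if (ret)
		return ret;

	/* Stash whatever token the driver ended up using for next time. */
	memcpy(cached_token, parms->token_out, sizeof(cached_token));
	have_token = true;

	return 0;
}
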
On top of my other arguments, the hash of the token signer's key must
match the current value in the LE pubkey hash MSRs. So, except for
future theoretical scenarios where we want to "revoke" an existing
token, the only way we can end up with a token we don't trust is if the
kernel launch enclave already screwed up or userspace has access to the
LE's private key.
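
If we ever wanted to double check that condition in software before
trusting a cached token, it could look something like the sketch below.
This is not the actual driver code; the MSR address is
IA32_SGXLEPUBKEYHASH0 from the SDM, while the helper and the cached hash
array are made-up names:

/*
 * Kernel-side sketch: verify that the kernel LE's signer hash still
 * matches the launch control MSRs.  le_pubkey_hash[] is assumed to hold
 * the SHA-256 hash of the kernel LE signer's public key modulus, split
 * across four u64s, mirroring the IA32_SGXLEPUBKEYHASH0..3 layout.
 */
#include <asm/msr.h>
#include <linux/types.h>

#define SGX_LEPUBKEYHASH0_MSR	0x0000008C	/* IA32_SGXLEPUBKEYHASH0 */

static bool sgx_le_hash_msrs_match(const u64 le_pubkey_hash[4])
{
	u64 val;
	int i;

	for (i = 0; i < 4; i++) {
		rdmsrl(SGX_LEPUBKEYHASH0_MSR + i, val);
		if (val != le_pubkey_hash[i])
			return false;
	}

	return true;
}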