Message-ID: <20180831181509.GB21555@linux.intel.com>
Date: Fri, 31 Aug 2018 11:15:09 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>
Cc: "Huang, Kai" <kai.huang@...el.com>,
"platform-driver-x86@...r.kernel.org"
<platform-driver-x86@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
"nhorman@...hat.com" <nhorman@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"suresh.b.siddha@...el.com" <suresh.b.siddha@...el.com>,
"Ayoun, Serge" <serge.ayoun@...el.com>,
"hpa@...or.com" <hpa@...or.com>,
"npmccallum@...hat.com" <npmccallum@...hat.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"linux-sgx@...r.kernel.org" <linux-sgx@...r.kernel.org>,
"Hansen, Dave" <dave.hansen@...el.com>
Subject: Re: [PATCH v13 10/13] x86/sgx: Add sgx_einit() for initializing
enclaves
On Fri, Aug 31, 2018 at 03:17:03PM +0300, Jarkko Sakkinen wrote:
> On Wed, Aug 29, 2018 at 07:33:54AM +0000, Huang, Kai wrote:
> > [snip..]
> >
> > > > >
> > > > > @@ -38,6 +39,18 @@ static LIST_HEAD(sgx_active_page_list);
> > > > >  static DEFINE_SPINLOCK(sgx_active_page_list_lock);
> > > > >  static struct task_struct *ksgxswapd_tsk;
> > > > >  static DECLARE_WAIT_QUEUE_HEAD(ksgxswapd_waitq);
> > > > > +static struct notifier_block sgx_pm_notifier;
> > > > > +static u64 sgx_pm_cnt;
> > > > > +
> > > > > +/* The cache for the last known values of IA32_SGXLEPUBKEYHASHx MSRs
> > > > > + * for each CPU. The entries are initialized when they are first used
> > > > > + * by sgx_einit().
> > > > > + */
> > > > > +struct sgx_lepubkeyhash {
> > > > > +	u64 msrs[4];
> > > > > +	u64 pm_cnt;
> > > >
> > > > May I ask why we need pm_cnt here? In fact, why do we need the suspend
> > > > stuff at all in this patch (namely, sgx_pm_cnt above and the related
> > > > code)? The commit message doesn't explain why PM handling is needed.
> > > > Please add a comment explaining why, or consider splitting the PM
> > > > handling into a separate patch.
> > >
> > > Refining the commit message probably makes more sense because without the
> > > PM code sgx_einit() would be broken: the MSRs are reset after waking up.
> > >
> > > Some kind of counter is required to keep track of the power cycle. When
> > > going to sleep, sgx_pm_cnt is incremented. sgx_einit() compares the
> > > current value of the global count to the value in the cache entry to see
> > > whether we are in a new power cycle.
> >
> > You mean reset to the Intel default? I think we could also just reset the
> > cached MSR values on each power cycle, which would be simpler, IMHO.
>
> I don't really see that much difference in the complexity.
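For reference, my understanding of the counter scheme is roughly the
following (a sketch only, not the actual patch; the notifier wiring is
elided, and all names other than sgx_pm_cnt and struct sgx_lepubkeyhash
are illustrative):

/* Bump the global count when heading into a power transition. */
static int sgx_pm_notifier_cb(struct notifier_block *nb,
			      unsigned long action, void *data)
{
	if (action == PM_SUSPEND_PREPARE ||
	    action == PM_HIBERNATION_PREPARE)
		sgx_pm_cnt++;

	return NOTIFY_DONE;
}

/* Rewrite the MSRs iff this CPU's cache is from an old power cycle
 * or holds a different hash.
 */
static void sgx_update_lepubkeyhash(struct sgx_lepubkeyhash *cache,
				    const u64 *hash)
{
	int i;

	if (cache->pm_cnt == sgx_pm_cnt &&
	    !memcmp(cache->msrs, hash, sizeof(cache->msrs)))
		return;

	for (i = 0; i < 4; i++)
		wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, hash[i]);

	memcpy(cache->msrs, hash, sizeof(cache->msrs));
	cache->pm_cnt = sgx_pm_cnt;
}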
Tracking the validity of the cache means we're hosed if we miss any
condition that causes the MSRs to be reset. I think we're better off
assuming the cache can be stale at any time, i.e. don't track power
cycles and instead handle EINIT failure due to INVALID_TOKEN by writing
the cache+MSRs with the desired hash and retrying EINIT. EINIT is
interruptible and its latency is extremely variable in any case, e.g.
tens of thousands of cycles, so this rarely-hit "slow path" probably
wouldn't affect the worst-case latency of EINIT.
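Something like this, for example (a rough sketch; __einit and
SGX_INVALID_EINITTOKEN are illustrative stand-ins for the real ENCLS
wrapper and error code, and preemption around the WRMSRs is ignored):

static int sgx_einit(struct sgx_sigstruct *sigstruct,
		     struct sgx_einittoken *token, void *secs,
		     u64 *lepubkeyhash)
{
	int i, ret;

	ret = __einit(sigstruct, token, secs);
	if (ret == SGX_INVALID_EINITTOKEN) {
		/*
		 * The cached hash is presumably stale, e.g. due to a
		 * power transition we didn't observe.  Rewrite the
		 * MSRs with the desired hash and retry once.
		 */
		for (i = 0; i < 4; i++)
			wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i,
			       lepubkeyhash[i]);
		ret = __einit(sigstruct, token, secs);
	}

	return ret;
}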
> > I think we definitely need some code to handle S3-S5, but it should be in
> > separate patches, since I think the major impact of S3-S5 is the entire
> > EPC being destroyed. Keeping pm_cnt is not sufficient to handle that
> > case, is it?
>
> The driver has SGX_POWER_LOST_ENCLAVE for ioctls and it deletes the TCS
> entries.
>
> > > This brings up one question though: how do we deal with the VM host going
> > > to sleep? The VM guest would not be aware of this.
> >
> > IMO the VM just sees a "sudden loss of EPC" after suspend & resume on the
> > host. The SGX driver and SDK should be able to handle a "sudden loss of
> > EPC", i.e. work together to re-establish the lost enclaves.
>
> This is not about the EPC; that is already dealt with by the driver. I'm
> concerned about the MSR cache, which would get out of sync.
>
> But I guess this logic is part of the KVM code anyway, now that I think
> about it more.
Ya, I expect that VMMs will preserve a VM's virtual pubkey MSRs, though
there might be scenarios where a VM has direct access to the hardware
MSRs. This is probably a moot point since I don't think we want to
assume the kernel's cache is 100% accurate, regardless of environment.