Message-ID: <fdf41507948023b600ef7febfd27951d8435a7dd.camel@kernel.org>
Date: Tue, 14 Sep 2021 00:12:36 +0300
From: Jarkko Sakkinen <jarkko@...nel.org>
To: Dave Hansen <dave.hansen@...el.com>,
Paolo Bonzini <pbonzini@...hat.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: x86@...nel.org, linux-sgx@...r.kernel.org,
dave.hansen@...ux.intel.com, yang.zhong@...el.com
Subject: Re: [PATCH 1/2] x86: sgx_vepc: extract sgx_vepc_remove_page

On Mon, 2021-09-13 at 07:55 -0700, Dave Hansen wrote:
> On 9/13/21 7:24 AM, Paolo Bonzini wrote:
> > > How does this end up happening in the first place?
> > >
> > > All enclave pages should start out on 'sgx_dirty_page_list' and
> > > ksgxd sanitizes them with EREMOVE before making them available. That
> > > should cover EREMOVE after reboots while SGX pages are initialized,
> > > including kexec().
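
(For reference, the boot-time flow described above is roughly the
following; this is a simplified sketch of the ksgxd sanitization loop
from memory, not the verbatim kernel code:

	struct sgx_epc_page *page;
	LIST_HEAD(dirty);
	int ret;

	while (!list_empty(&sgx_dirty_page_list)) {
		page = list_first_entry(&sgx_dirty_page_list,
					struct sgx_epc_page, list);

		/* EREMOVE the page; on success it becomes allocatable. */
		ret = __eremove(sgx_get_epc_virt_addr(page));
		if (!ret) {
			list_del(&page->list);
			sgx_free_epc_page(page);
		} else {
			/*
			 * EREMOVE can fail, e.g. for an SECS page that
			 * still has child pages: retry it later.
			 */
			list_move_tail(&page->list, &dirty);
		}

		cond_resched();
	}

	list_splice(&dirty, &sgx_dirty_page_list);

so by the time pages are handed out, they are known to be clean.)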
> >
> > By "Windows startup" I mean even after guest reboot. Because another
> > process could sneak in and steal your EPC pages between a close() and an
> > open(), I'd like to have a way to EREMOVE the pages while keeping them
> > assigned to the specific vEPC instance, i.e. *without* going through
> > sgx_vepc_free_page().
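
(For concreteness, the helper this patch extracts is roughly the
following; a sketch based on the patch subject, not a quote of the
diff itself:

	static int sgx_vepc_remove_page(struct sgx_epc_page *epc_page)
	{
		/*
		 * EREMOVE the page but do NOT return it to the free
		 * pool, so it stays assigned to this vEPC instance.
		 * A superfluous EREMOVE on an already-clean page is
		 * harmless.
		 */
		return __eremove(sgx_get_epc_virt_addr(epc_page));
	}

sgx_vepc_free_page() can then call this helper before handing the
page back to the allocator.)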
>
> Oh, so you want fresh EPC state for the guest, but you're concerned that
> the previous guest might have left them in a bad state. The current
> method of getting a new vepc instance (which guarantees fresh state) has
> some other downsides.
>
> Can't another process steal pages via sgxd and reclaim at any time?
> What's the extra concern here about going through a close()/open()
> cycle? Performance?

No, ksgxd does not steal anything from vEPC regions: they are not
part of the reclaiming process.
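
vEPC backing pages are allocated non-reclaimable in the first place,
roughly (a sketch, exact call site from memory):

	/*
	 * In sgx_vepc_get_epc_page(): reclaim=false keeps the page
	 * off the active page list, so ksgxd never considers it.
	 */
	epc_page = sgx_alloc_epc_page(vepc, false);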
/Jarkko