[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <06db5a41-3485-9141-10b5-56ca57ed1792@intel.com>
Date: Mon, 13 Sep 2021 07:55:14 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: x86@...nel.org, linux-sgx@...r.kernel.org, jarkko@...nel.org,
dave.hansen@...ux.intel.com, yang.zhong@...el.com
Subject: Re: [PATCH 1/2] x86: sgx_vepc: extract sgx_vepc_remove_page
On 9/13/21 7:24 AM, Paolo Bonzini wrote:
>> How does this end up happening in the first place?
>>
>> All enclave pages should start out on 'sgx_dirty_page_list' and
>> ksgxd sanitizes them with EREMOVE before making them available. That
>> should cover EREMOVE after reboots while SGX pages are initialized,
>> including kexec().
>
> By "Windows startup" I mean even after guest reboot. Because another
> process could sneak in and steal your EPC pages between a close() and an
> open(), I'd like to have a way to EREMOVE the pages while keeping them
> assigned to the specific vEPC instance, i.e. *without* going through
> sgx_vepc_free_page().
Oh, so you want fresh EPC state for the guest, but you're concerned that
the previous guest might have left them in a bad state. The current
method of getting a new vepc instance (which guarantees fresh state) has
some other downsides.
Can't another process steal pages via sgxd and reclaim at any time?
What's the extra concern here about going through a close()/open()
cycle? Performance?
Powered by blists - more mailing lists