Message-ID: <aAo_2MPGOkOciNuM@google.com>
Date: Thu, 24 Apr 2025 06:42:48 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Elena Reshetova <elena.reshetova@...el.com>
Cc: Kai Huang <kai.huang@...el.com>, Dave Hansen <dave.hansen@...el.com>,
"linux-sgx@...r.kernel.org" <linux-sgx@...r.kernel.org>,
Vincent R Scarlata <vincent.r.scarlata@...el.com>, "x86@...nel.org" <x86@...nel.org>,
"jarkko@...nel.org" <jarkko@...nel.org>, Vishal Annapurve <vannapurve@...gle.com>, Chong Cai <chongc@...gle.com>,
Asit K Mallick <asit.k.mallick@...el.com>, Erdem Aktas <erdemaktas@...gle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "bondarn@...gle.com" <bondarn@...gle.com>,
"dionnaglaze@...gle.com" <dionnaglaze@...gle.com>, Scott Raynor <scott.raynor@...el.com>
Subject: Re: [PATCH v3 2/2] x86/sgx: Implement EUPDATESVN and
opportunistically call it during first EPC page alloc
On Thu, Apr 24, 2025, Elena Reshetova wrote:
> > On Tue, Apr 22, 2025, Kai Huang wrote:
> > > On Fri, 2025-04-18 at 07:55 -0700, Sean Christopherson wrote:
> > > > On Tue, Apr 15, 2025, Elena Reshetova wrote:
> > > > That said, handling this deep in the bowels of EPC page allocation seems
> > > > unnecessary. The only way for there to be no active EPC pages is if
> > > > there are no enclaves. As above, virtual EPC usage is already all but
> > > > guaranteed to hit false positives, so I don't think there's anything
> > > > gained by trying to update the SVN based on EPC allocations.
> > > >
> > > > So rather than react to EPC allocations, why not hook sgx_open() and
> > sgx_vepc_open()?
> > >
> > > The only thing I don't like about this is we need to hook both of them.
> >
> > And having to maintain a separate counter.
...
> If we follow the approach of trying to execute EUPDATESVN via
> sgx_open() and sgx_vepc_open() paths, it adds more complexity to kernel
> code
This is where I disagree. I don't see how it's more complex even on the surface,
and when you start considering the implications of "randomly" inserting a non-
trivial operation into EPC allocation, IMO it's far less complex overall.
> and imo it still doesn’t remove the complexity from userspace
> orchestration sw. I.e. userspace still has to get rid of host enclaves and
> SGX-enabled VMs (because syncing with VM owners to make sure their
> enclaves are destroyed and these VMs are ready for EUPDATESVN seems
> like a big organizational burden and is error prone).
Yeah, I don't see a way around that.
> So, the only thing this approach would address would be an EPC
> pre-allocation done by qemu? Wouldn't it be more reasonable
> (here I am purely speculating, I don't know the qemu code) to implement
> in qemu the code to drop EPC pre-allocation if no VMs with SGX are
> running? That would be a good overall policy imo not to waste EPC
> space when not needed in practice.
QEMU only preallocates when the VM is being created, i.e. QEMU doesn't maintain
a persistent pool. All I was saying is that userspace needs to shut down SGX
capable VMs, even if the VM isn't actively running enclaves, which is a shame.
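To illustrate why: the thing that pins the vEPC is simply the /dev/sgx_vepc
file the VMM keeps open (and mapped) for the life of the VM.  Roughly, and
purely as an illustration, i.e. this is NOT QEMU's actual code:

#include <fcntl.h>
#include <sys/mman.h>
#include <stddef.h>

/*
 * Illustrative only: a VMM backs guest EPC by opening /dev/sgx_vepc and
 * mmap()ing it.  The fd/mapping (and thus the usage count taken in
 * sgx_vepc_open() in the sketch below) stays around until the VM is torn
 * down, even if the guest never runs an enclave.
 */
static void *map_guest_epc(size_t size)
{
	int fd = open("/dev/sgx_vepc", O_RDWR);
	void *epc;

	if (fd < 0)
		return NULL;

	epc = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
		   MAP_SHARED, fd, 0);

	/* fd and mapping are kept for the VM's lifetime. */
	return epc == MAP_FAILED ? NULL : epc;
}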
Untested sketch of hooking create/delete to do SVN updates.
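The sgx_update_svn() stub in the diff would wrap whatever your patch already
does for ENCLS[EUPDATESVN]; purely to show the shape I have in mind (the
feature flag, ENCLS wrapper, and "no update needed" names below are made up,
substitute whatever the real patch provides):

/*
 * Illustrative only: X86_FEATURE_SGX_EUPDATESVN, __eupdatesvn() and
 * SGX_NO_UPDATE are assumed names, not necessarily what the EUPDATESVN
 * series defines.
 */
static int sgx_update_svn(void)
{
	int ret;

	/* Nothing to do if the CPU doesn't enumerate EUPDATESVN. */
	if (!cpu_feature_enabled(X86_FEATURE_SGX_EUPDATESVN))
		return 0;

	/*
	 * The usage count guarantees there are no EPC pages in use, which
	 * is a hard requirement for EUPDATESVN to succeed.
	 */
	ret = __eupdatesvn();

	/* "Already running the latest SVN" is success as far as we care. */
	if (ret == SGX_NO_UPDATE)
		return 0;

	return ret;
}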
---
 arch/x86/kernel/cpu/sgx/driver.c |  4 ++++
 arch/x86/kernel/cpu/sgx/encl.c   |  2 ++
 arch/x86/kernel/cpu/sgx/main.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/sgx.h    |  3 +++
 arch/x86/kernel/cpu/sgx/virt.c   |  6 ++++++
 5 files changed, 57 insertions(+)
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index 7f8d1e11dbee..669e44d61f9f 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -19,6 +19,10 @@ static int sgx_open(struct inode *inode, struct file *file)
 	struct sgx_encl *encl;
 	int ret;
 
+	ret = sgx_inc_usage_count();
+	if (ret)
+		return ret;
+
 	encl = kzalloc(sizeof(*encl), GFP_KERNEL);
 	if (!encl)
 		return -ENOMEM;
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 279148e72459..84ca78627e55 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -765,6 +765,8 @@ void sgx_encl_release(struct kref *ref)
 	WARN_ON_ONCE(encl->secs.epc_page);
 
 	kfree(encl);
+
+	sgx_dec_usage_count();
 }
 
 /*
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 8ce352fc72ac..ca74c91d4291 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -914,6 +914,48 @@ int sgx_set_attribute(unsigned long *allowed_attributes,
 }
 EXPORT_SYMBOL_GPL(sgx_set_attribute);
 
+static atomic_t sgx_usage_count;
+static DEFINE_MUTEX(sgx_svn_lock);
+
+static int sgx_update_svn(void)
+{
+	// blah blah blah
+}
+
+int sgx_inc_usage_count(void)
+{
+	int ret;
+
+	if (atomic_inc_not_zero(&sgx_usage_count))
+		return 0;
+
+	guard(mutex)(&sgx_svn_lock);
+
+	if (atomic_inc_not_zero(&sgx_usage_count))
+		return 0;
+
+	/* Count the first user only if the SVN update succeeds. */
+	ret = sgx_update_svn();
+	if (!ret)
+		atomic_inc(&sgx_usage_count);
+
+	return ret;
+}
+
+void sgx_dec_usage_count(void)
+{
+	if (atomic_dec_return(&sgx_usage_count))
+		return;
+
+	guard(mutex)(&sgx_svn_lock);
+
+	if (atomic_read(&sgx_usage_count))
+		return;
+
+	/* The last user is gone; try to bump the SVN while nothing uses the EPC. */
+	sgx_update_svn();
+}
+
 static int __init sgx_init(void)
 {
 	int ret;
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index d2dad21259a8..f5940393d9bd 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -102,6 +102,9 @@ static inline int __init sgx_vepc_init(void)
 }
 #endif
 
+int sgx_inc_usage_count(void);
+void sgx_dec_usage_count(void);
+
 void sgx_update_lepubkeyhash(u64 *lepubkeyhash);
 
 #endif /* _X86_SGX_H */
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 7aaa3652e31d..e548de340c2e 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -255,12 +255,18 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
 	xa_destroy(&vepc->page_array);
 	kfree(vepc);
 
+	sgx_dec_usage_count();
 	return 0;
 }
 
 static int sgx_vepc_open(struct inode *inode, struct file *file)
 {
 	struct sgx_vepc *vepc;
+	int ret;
+
+	ret = sgx_inc_usage_count();
+	if (ret)
+		return ret;
 
 	vepc = kzalloc(sizeof(struct sgx_vepc), GFP_KERNEL);
 	if (!vepc)
base-commit: 37374f38f6c5291ae66ce05f9435c3b519b6f234
--