Message-ID: <DM8PR11MB57503621B1D7674404A62004E7A02@DM8PR11MB5750.namprd11.prod.outlook.com>
Date: Fri, 28 Mar 2025 08:07:24 +0000
From: "Reshetova, Elena" <elena.reshetova@...el.com>
To: Jarkko Sakkinen <jarkko@...nel.org>
CC: "Hansen, Dave" <dave.hansen@...el.com>, "linux-sgx@...r.kernel.org"
	<linux-sgx@...r.kernel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "x86@...nel.org" <x86@...nel.org>, "Mallick,
 Asit K" <asit.k.mallick@...el.com>, "Scarlata, Vincent R"
	<vincent.r.scarlata@...el.com>, "Cai, Chong" <chongc@...gle.com>, "Aktas,
 Erdem" <erdemaktas@...gle.com>, "Annapurve, Vishal" <vannapurve@...gle.com>,
	"dionnaglaze@...gle.com" <dionnaglaze@...gle.com>, "bondarn@...gle.com"
	<bondarn@...gle.com>, "Raynor, Scott" <scott.raynor@...el.com>, "Shutemov,
 Kirill" <kirill.shutemov@...el.com>
Subject: RE: [PATCH 1/4] x86/sgx: Add total number of EPC pages


> On Thu, Mar 27, 2025 at 03:29:53PM +0000, Reshetova, Elena wrote:
> >
> > > On Mon, Mar 24, 2025 at 12:12:41PM +0000, Reshetova, Elena wrote:
> > > > > On Fri, Mar 21, 2025 at 02:34:40PM +0200, Elena Reshetova wrote:
> > > > > > In order to successfully execute ENCLS[EUPDATESVN], EPC must be
> > > empty.
> > > > > > SGX already has a variable sgx_nr_free_pages that tracks free
> > > > > > EPC pages. Add a new variable, sgx_nr_total_pages, that will keep
> > > > > > track of total number of EPC pages. It will be used in subsequent
> > > > > > patch to change the sgx_nr_free_pages into sgx_nr_used_pages and
> > > > > > allow an easy check for an empty EPC.
> > > > >
> > > > > First off, remove "in subsequent patch".
> > > >
> > > > Ok
> > > >
> > > > >
> > > > > What does "change sgx_nr_free_pages into sgx_nr_used_pages"
> mean?
> > > >
> > > > As you can see from patch 2/4, I had to turn around the meaning of the
> > > > existing sgx_nr_free_pages atomic counter not to count the # of free
> > > > pages in EPC, but to count the # of used EPC pages (hence the change
> > > > of name to sgx_nr_used_pages). The reason for doing this is only
> > > > apparent in patch
> > >
> > > Why do you *absolutely* need to invert the meaning and cannot make
> > > this work by any means otherwise?
> > >
> > > I highly doubt this could not be done the other way around.
> >
> > I can make it work. The point is that this way is much better and no damage
> > to the existing logic is done. The sgx_nr_free_pages counter is used only
> > for page reclaiming and is checked in a single piece of code.
> > To give you an idea, the previous iteration of the code looked like below.
> > First, I had to define a new unconditional spinlock to protect EPC page
> > allocation:
> >
> > diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> > index c8a2542140a1..4f445c28929b 100644
> > --- a/arch/x86/kernel/cpu/sgx/main.c
> > +++ b/arch/x86/kernel/cpu/sgx/main.c
> > @@ -31,6 +31,7 @@ static DEFINE_XARRAY(sgx_epc_address_space);
> >   */
> >  static LIST_HEAD(sgx_active_page_list);
> >  static DEFINE_SPINLOCK(sgx_reclaimer_lock);
> > +static DEFINE_SPINLOCK(sgx_allocate_epc_page_lock);
> 
> >
> >  static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
> >  static unsigned long sgx_nr_total_pages;
> > @@ -457,7 +458,10 @@ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
> >  	page->flags = 0;
> >
> >  	spin_unlock(&node->lock);
> > +
> > +	spin_lock(&sgx_allocate_epc_page_lock);
> >  	atomic_long_dec(&sgx_nr_free_pages);
> > +	spin_unlock(&sgx_allocate_epc_page_lock);
> >
> >  	return page;
> >  }
> >
> > And then also take the spinlock every time eupdatesvn attempts to run:
> >
> > int sgx_updatesvn(void)
> > +{
> > +	int ret;
> > +	int retry = 10;
> 
> Reverse xmas tree order.
> 
> > +
> > +	spin_lock(&sgx_allocate_epc_page_lock);
> 
> You could use guard for this.
> 
> https://elixir.bootlin.com/linux/v6.13.7/source/include/linux/cleanup.h
> 
> > +
> > +	if (atomic_long_read(&sgx_nr_free_pages) != sgx_nr_total_pages) {
> > +		spin_unlock(&sgx_allocate_epc_page_lock);
> > +		return SGX_EPC_NOT_READY;
> 
> Don't use uarch error codes.

Sure, thanks, I can fix all of the above; this was just to give an idea of
how the other version of the code would look.
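
For reference, a cleaned-up sketch of how that function could look with the
guard() helper from cleanup.h (untested, and -EBUSY is only a stand-in for
the uarch error code until we settle on something better):

int sgx_updatesvn(void)
{
	int retry = 10;
	int ret;

	/* The lock is dropped automatically on every return path. */
	guard(spinlock)(&sgx_allocate_epc_page_lock);

	if (atomic_long_read(&sgx_nr_free_pages) != sgx_nr_total_pages)
		return -EBUSY;

	do {
		ret = __eupdatesvn();
		if (ret != SGX_INSUFFICIENT_ENTROPY)
			break;
	} while (--retry);

	return ret;
}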

> 
> > +	}
> > +
> > +	do {
> > +		ret = __eupdatesvn();
> > +		if (ret != SGX_INSUFFICIENT_ENTROPY)
> > +			break;
> > +
> > +	} while (--retry);
> > +
> > +	spin_unlock(&sgx_allocate_epc_page_lock);
> >
> > Which was called from each enclave create ioctl:
> >
> > @@ -163,6 +163,11 @@ static long sgx_ioc_enclave_create(struct sgx_encl *encl, void __user *arg)
> >  	if (copy_from_user(&create_arg, arg, sizeof(create_arg)))
> >  		return -EFAULT;
> >
> > +	/* Unless running in a VM, execute EUPDATESVN if the instruction is available */
> > +	if ((cpuid_eax(SGX_CPUID) & SGX_CPUID_EUPDATESVN) &&
> > +	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > +		sgx_updatesvn();
> > +
> >  	secs = kmalloc(PAGE_SIZE, GFP_KERNEL);
> >  	if (!secs)
> >  		return -ENOMEM;
> >
> > Would you agree that this way is much worse code/logic-wise, even
> > without benchmarks?
> 
> Yes, but obviously I cannot promise that I'll accept this as-is
> until I see the final version.

Are you saying you prefer *this version with the spinlock* over the
simpler version that utilizes the fact that sgx_nr_free_pages is changed
into tracking the number of used pages?
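
To make the comparison concrete, the check I have in mind is roughly the
sketch below (illustration only: it relies on sgx_nr_used_pages counting
allocated EPC pages, and it does not show how patch 2/4 serializes the
check against concurrent allocations):

	/*
	 * With the inverted counter, "EPC is empty" is a single atomic
	 * read; no comparison against sgx_nr_total_pages and no new
	 * sgx_allocate_epc_page_lock taken on every allocation.
	 */
	if (atomic_long_read(&sgx_nr_used_pages) != 0)
		return -EBUSY;

	ret = __eupdatesvn();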

> 
> Also, you probably should use a mutex, given the loop where we cannot
> temporarily exit the lock (whereas e.g. in the keyrings gc we can).

I am not sure I understand this; could you please elaborate on why I need
an additional mutex here? Or are you suggesting switching the spinlock to
a mutex?
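
If it is the latter, I assume the change relative to the sketch earlier in
this mail would be mostly mechanical, i.e. (just my reading of the
suggestion, not a tested patch):

static DEFINE_MUTEX(sgx_allocate_epc_page_lock);

and in sgx_updatesvn():

	/* Sleeping lock, so holding it across the EUPDATESVN retry loop is fine. */
	guard(mutex)(&sgx_allocate_epc_page_lock);

with the spin_lock()/spin_unlock() pair on the allocation path becoming
mutex_lock()/mutex_unlock().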

Best Regards,
Elena.
