Message-ID: <886a50c8e97b5d2ef5b7c004a63365d2aa480f33.camel@linux.intel.com>
Date: Fri, 14 Jan 2022 09:45:42 -0800
From: Kristen Carlson Accardi <kristen@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>,
Haitao Huang <haitao.huang@...ux.intel.com>,
linux-sgx@...r.kernel.org, Jarkko Sakkinen <jarkko@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] x86/sgx: Add accounting for tracking overcommit

On Tue, 2022-01-11 at 09:39 -0800, Dave Hansen wrote:
> On 1/11/22 08:33, Haitao Huang wrote:
> > On Tue, 11 Jan 2022 09:43:35 -0600, Dave Hansen
> > <dave.hansen@...el.com> wrote:
> > > On 1/11/22 06:20, Haitao Huang wrote:
> > > > If the system has a ton of RAM but limited EPC, I think it makes
> > > > sense to allow more EPC swapping, can we do min(0.5*RAM, 2*EPC)?
> > > > I suppose if the system is used for a heavy enclave load, the
> > > > user would be willing to use at least half of RAM.
> > >
> > > If I have 100GB of RAM and 100MB of EPC, can I really
> > > *meaningfully* run 50GB of enclaves? In that case, if everything
> > > were swapped out evenly, a given page reference would have a
> > > 499/500 chance of faulting.
> >
> > The formula caps swapping at 2*EPC, so at most 200MB is swapped out
> > and a random page reference misses with probability at most 2/3.
> > The original hard-coded cap of 1.5*EPC may still consume too much
> > RAM if RAM < 1.5*EPC.
>
> Oh, sorry, I read that backwards.
>
> Basing it on the amount of RAM is a bit nasty. You might either
> overly restrict the amount of allowed EPC, or you have to handle
> memory hotplug.
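
For concreteness, Haitao's proposed cap would look something like the
sketch below (the sgx_nr_epc_pages counter and the helper name are
illustrative, not taken from this patch). With Dave's 100GB RAM /
100MB EPC example it evaluates to min(50GB, 200MB) = 200MB of backing
store, i.e. at most 2*EPC swapped out:

	/*
	 * Illustrative sketch only, not code from this patch.
	 * Assumes <linux/mm.h> and <linux/minmax.h>; the counter
	 * and helper names are made up for this example.
	 */
	static unsigned long sgx_nr_epc_pages;	/* total EPC pages */

	static unsigned long sgx_backing_cap_pages(void)
	{
		/* Haitao's proposal: min(0.5 * RAM, 2 * EPC) */
		return min(totalram_pages() / 2, 2 * sgx_nr_epc_pages);
	}
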
My opinion is that we should keep the current algorithm for now, since
it is straightforward, and cgroups will eventually allow finer-grained
control; a rough sketch of that accounting follows.
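
Just to make the discussion concrete, the accounting can stay as
simple as a global counter charged per backing page and checked
against whatever cap we settle on. The names below are made up, not
taken from the patch:

	/*
	 * Rough sketch for discussion, not the patch code. Assumes
	 * <linux/atomic.h> and <linux/errno.h>.
	 */
	static atomic_long_t sgx_nr_backing_pages;
	static unsigned long sgx_backing_cap;	/* e.g. 1.5 * EPC pages */

	/* Charge one page of backing store; fail if over the cap. */
	static int sgx_charge_backing(void)
	{
		if (atomic_long_inc_return(&sgx_nr_backing_pages) >
		    (long)sgx_backing_cap) {
			atomic_long_dec(&sgx_nr_backing_pages);
			return -ENOMEM;
		}
		return 0;
	}

	/* Release the charge when the backing page is freed. */
	static void sgx_uncharge_backing(void)
	{
		atomic_long_dec(&sgx_nr_backing_pages);
	}

A cgroup controller would then replace the single global cap with
per-group limits without changing the charge/uncharge points.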