Message-ID: <bc370c39-d8c1-9371-2345-cf255ced9a1b@intel.com>
Date: Tue, 11 Jan 2022 07:43:35 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Haitao Huang <haitao.huang@...ux.intel.com>,
linux-sgx@...r.kernel.org, Jarkko Sakkinen <jarkko@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Kristen Carlson Accardi <kristen@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] x86/sgx: Add accounting for tracking overcommit
On 1/11/22 06:20, Haitao Huang wrote:
> If the system has a ton of RAM but limited EPC, I think it makes sense
> to allow more EPC swapping, can we do min(0.5*RAM, 2*EPC)?
> I suppose if the system is used for heavy enclave load, user would be
> willing to at least use half of RAM.
If I have 100GB of RAM and 100MB of EPC, can I really *meaningfully* run
50GB of enclaves? In that case, if everything were swapped out evenly,
any given page reference would have a 499/500 chance of faulting.
This isn't about a "heavy enclave load". If there is *that* much
swapped-out enclave memory, will an enclave even make meaningful forward
progress?
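The residency arithmetic above can be sketched in a few lines (the sizes are the hypothetical ones from this email, in decimal units so the 499/500 figure falls out; this is illustrative math, not SGX driver code):

```python
# Hypothetical sizes from the example: 100 MB EPC, 50 GB of enclave memory.
epc_bytes = 100e6
enclave_bytes = 50e9

# If pages are swapped out evenly, the chance a touched page is resident
# in EPC is just the ratio of EPC to total enclave memory.
resident = epc_bytes / enclave_bytes   # 1/500
fault = 1 - resident                   # 499/500

print(f"resident fraction: 1 in {int(enclave_bytes / epc_bytes)}")
print(f"fault probability per page reference: {fault:.3f}")
```

At a ~0.998 fault rate per page touch, nearly every memory reference pays an EPC reclaim/reload round trip, which is the "meaningful forward progress" concern raised above.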