Message-Id: <20240515131240.1304824-1-dmitrii.kuvaiskii@intel.com>
Date: Wed, 15 May 2024 06:12:38 -0700
From: Dmitrii Kuvaiskii <dmitrii.kuvaiskii@intel.com>
To: dave.hansen@linux.intel.com,
	jarkko@kernel.org,
	kai.huang@intel.com,
	haitao.huang@linux.intel.com,
	reinette.chatre@intel.com,
	linux-sgx@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: mona.vij@intel.com,
	kailun.qin@intel.com
Subject: [PATCH v2 0/2] x86/sgx: Fix two data races in EAUG/EREMOVE flows

SGX runtimes such as Gramine may implement EDMM-based lazy allocation of
enclave pages and may support MADV_DONTNEED semantics [1]. The former
implies #PF-based page allocation (the EAUG flow), and the latter implies
the use of the SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl (the EREMOVE flow).
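
As an illustration (this is not Gramine's actual code), the host-side
part of the removal flow boils down to one ioctl on the enclave fd. The
sketch below assumes the pages were already changed to PT_TRIM via
SGX_IOC_ENCLAVE_MODIFY_TYPES and EACCEPTed from inside the enclave;
encl_fd, offset and length are hypothetical placeholders:

#include <stdint.h>
#include <sys/ioctl.h>
#include <asm/sgx.h>

/* hedged sketch: back MADV_DONTNEED with SGX_IOC_ENCLAVE_REMOVE_PAGES */
static int remove_enclave_pages(int encl_fd, uint64_t offset, uint64_t length)
{
    struct sgx_enclave_remove_pages params = {
        .offset = offset, /* page-aligned offset into the enclave */
        .length = length, /* page-aligned length in bytes */
    };
    if (ioctl(encl_fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &params) < 0)
        return -1;
    /* the driver reports partial progress in params.count (bytes) */
    return params.count == length ? 0 : -1;
}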

EDMM-based lazy allocation and MADV_DONTNEED semantics provide a
significant performance improvement for some workloads that run on
Gramine. For example, a Java workload with a 16GB enclave size shows an
approx. 57x improvement in total runtime. We therefore consider it
important to permit these optimizations in Gramine. However, we observed
hangs of applications (Node.js, PyTorch, R, iperf, Blender, Nginx) when
run on Gramine with the EDMM, lazy allocation, and MADV_DONTNEED features
enabled.

We wrote a trivial stress test to reproduce the hangs observed in
real-world applications. The test stresses the #PF-based page allocation
and SGX_IOC_ENCLAVE_REMOVE_PAGES flows in the SGX driver:

/* repeatedly touch different enclave pages at random and mix with
 * madvise(MADV_DONTNEED) to stress EAUG/EREMOVE flows; page_size,
 * get_random_ulong() and READ_ONCE() are helpers defined elsewhere */
static void* thread_func(void* arg) {
    size_t num_pages = 0xA000 / page_size;
    for (int i = 0; i < 5000; i++) {
        size_t page = get_random_ulong() % num_pages;
        char data = READ_ONCE(((char*)arg)[page * page_size]);
        (void)data; /* the read itself faults the page in (EAUG) */

        page = get_random_ulong() % num_pages;
        madvise(arg + page * page_size, page_size, MADV_DONTNEED);
    }
    return NULL;
}

addr = mmap(NULL, 0xA000, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
pthread_t threads[16];
for (int i = 0; i < 16; i++)
    pthread_create(&threads[i], NULL, thread_func, addr);
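
The main thread then waits for the workers; a minimal, assumed
completion of the harness above:

for (int i = 0; i < 16; i++)
    pthread_join(threads[i], NULL);

When this program runs on Gramine with EDMM, lazy allocation, and
MADV_DONTNEED enabled, the random page touches exercise the EAUG (#PF)
flow and the madvise() calls exercise the EREMOVE flow.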

This test uncovers two data races in the SGX driver. The two patches in
this series describe and fix these races.

I performed several stress tests to verify that there are no other data
races (at least with the test program above):

- On an Icelake server with 128GB of PRM, without madvise(). This
  stresses the first data race. A Gramine SGX test suite was running in
  the background for additional stress. Result: 1,000 runs without hangs
  (without the first bug fix: hangs every time).
- On an Icelake server with 128GB of PRM, with madvise(). This stresses
  the second data race. A Gramine SGX test suite was running in the
  background for additional stress. Result: 1,000 runs without hangs
  (with the first bug fix but without the second bug fix: hangs approx.
  once in 50 runs).
- On an Icelake server with 4GB of PRM, with madvise(). This
  additionally stresses the enclave page swapping flows. Two Gramine SGX
  test suites were running in the background for additional stressing of
  swapping (I observed 100% CPU utilization from ksgxd, which confirms
  that swapping happens). Result: 1,000 runs without hangs.
[1] https://github.com/gramineproject/gramine/pull/1513
v1 -> v2:
- No changes in the code itself
- Expanded cover letter
- Added CPU1 vs CPU2 race scenarios in commit messages
v1: https://lore.kernel.org/all/20240429104330.3636113-3-dmitrii.kuvaiskii@intel.com/
Dmitrii Kuvaiskii (2):
x86/sgx: Resolve EAUG race where losing thread returns SIGBUS
x86/sgx: Resolve EREMOVE page vs EAUG page data race
arch/x86/kernel/cpu/sgx/encl.c | 10 +++++++---
arch/x86/kernel/cpu/sgx/encl.h | 3 +++
arch/x86/kernel/cpu/sgx/ioctl.c | 1 +
3 files changed, 11 insertions(+), 3 deletions(-)
--
2.34.1