Message-ID: <20201117192912.GD10393@kernel.org>
Date: Tue, 17 Nov 2020 21:29:12 +0200
From: Jarkko Sakkinen <jarkko@...nel.org>
To: Dave Hansen <dave.hansen@...el.com>
Cc: akpm@...ux-foundation.org, andriy.shevchenko@...ux.intel.com,
asapek@...gle.com, bp@...en8.de, cedric.xing@...el.com,
chenalexchen@...gle.com, conradparker@...gle.com,
cyhanish@...gle.com, haitao.huang@...el.com, jethro@...tanix.com,
kai.huang@...el.com, kai.svahn@...el.com, kmoy@...gle.com,
linux-kernel@...r.kernel.org, linux-sgx@...r.kernel.org,
ludloff@...gle.com, luto@...nel.org, mikko.ylinen@...el.com,
nhorman@...hat.com, npmccallum@...hat.com, puiterwijk@...hat.com,
rientjes@...gle.com, sean.j.christopherson@...el.com,
serge.ayoun@...el.com, tglx@...utronix.de, x86@...nel.org,
yaozhangx@...gle.com, Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH] x86/sgx: clarify 'laundry_list' locking
On Mon, Nov 16, 2020 at 02:25:31PM -0800, Dave Hansen wrote:
> From: Dave Hansen <dave.hansen@...ux.intel.com>
>
> Short Version:
>
> The SGX section->laundry_list structure is effectively thread-local,
> but declared next to some shared structures. Its semantics are clear
> as mud. Fix that. No functional changes. Compile tested only.
>
> Long Version:
>
> The SGX hardware keeps per-page metadata. This can provide things like
> permissions, integrity and replay protection. It also prevents things
> like having an enclave page mapped multiple times or shared between
> enclaves.
>
> But, that presents a problem for kexec()'d kernels (or any other kernel
> that does not run immediately after a hardware reset). This is because
> the last kernel may have been rude and forgotten to reset pages, which
> would trigger the "shared page" sanity check.
>
> To fix this, the SGX code "launders" the pages by running the EREMOVE
> instruction on all pages at boot. This is slow and can take a long
> time, so it is performed off in the SGX-specific ksgxd instead of being
> synchronous at boot. The init code hands the list of pages to launder
> in a per-SGX-section list: ->laundry_list. The only code to touch this
> list is the init code and ksgxd. This means that no locking is
> necessary for ->laundry_list.
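
For anyone following along without the tree handy, the loop being
described ends up looking roughly like the sketch below. It is
reconstructed from the hunks later in this patch; the parameter name and
the spin_unlock()/list_move()/list_move_tail() lines are filled in by
hand, so take it as a sketch rather than verbatim kernel code:

	static void sgx_sanitize_section(struct sgx_epc_section *section)
	{
		struct sgx_epc_page *page;
		LIST_HEAD(dirty);
		int ret;

		/* Only the init code and ksgxd ever walk this list. */
		while (!list_empty(&section->init_laundry_list)) {
			if (kthread_should_stop())
				return;

			/* ->lock is taken for ->page_list, not for the laundry list. */
			spin_lock(&section->lock);

			page = list_first_entry(&section->init_laundry_list,
						struct sgx_epc_page, list);

			ret = __eremove(sgx_get_epc_virt_addr(page));
			if (!ret)
				/* EREMOVE succeeded: page is now usable. */
				list_move(&page->list, &section->page_list);
			else
				/* EREMOVE failed: park the page locally. */
				list_move_tail(&page->list, &dirty);

			spin_unlock(&section->lock);

			cond_resched();
		}

		/* Anything still dirty goes back for another pass. */
		list_splice(&dirty, &section->init_laundry_list);
	}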
>
> However, a lock is required for section->page_list, which is accessed
> while creating enclaves and by ksgxd. This lock (section->lock) is
> acquired by ksgxd while also processing ->laundry_list. It is easy
> to confuse the purpose of the locking as being for ->laundry_list
> and ->page_list.
>
> Rename ->laundry_list to ->init_laundry_list to make it clear that
> this is not normally used at runtime. Also add some comments
> clarifying the locking, and reorganize 'sgx_epc_section' to put 'lock'
> near the things it protects.
>
> Note: init_laundry_list is 128 bytes of wasted space at runtime. It
> could theoretically be dynamically allocated and then freed after the
> laundering process. But, I suspect it would take nearly 128 bytes
> of extra instructions to do that.
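
(For reference, the 128 bytes works out to SGX_MAX_EPC_SECTIONS *
sizeof(struct list_head), i.e. 8 * 16 bytes on 64-bit, assuming the
current maximum of eight EPC sections.)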
>
> Cc: Jethro Beekman <jethro@...tanix.com>
> Cc: Serge Ayoun <serge.ayoun@...el.com>
> Cc: Jarkko Sakkinen <jarkko@...nel.org>
> Signed-off-by: Jarkko Sakkinen <jarkko@...nel.org>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Thanks, this does make sense to me. I'll squash it.
/Jarkko
> ---
>
> b/arch/x86/kernel/cpu/sgx/main.c | 14 ++++++++------
> b/arch/x86/kernel/cpu/sgx/sgx.h | 15 ++++++++++++---
> 2 files changed, 20 insertions(+), 9 deletions(-)
>
> diff -puN arch/x86/kernel/cpu/sgx/main.c~sgx-laundry-comments arch/x86/kernel/cpu/sgx/main.c
> --- a/arch/x86/kernel/cpu/sgx/main.c~sgx-laundry-comments 2020-11-16 13:55:42.624972349 -0800
> +++ b/arch/x86/kernel/cpu/sgx/main.c 2020-11-16 13:58:10.652971980 -0800
> @@ -36,13 +36,15 @@ static void sgx_sanitize_section(struct
> LIST_HEAD(dirty);
> int ret;
>
> -	while (!list_empty(&section->laundry_list)) {
> + /* init_laundry_list is thread-local, no need for a lock: */
> +	while (!list_empty(&section->init_laundry_list)) {
> if (kthread_should_stop())
> return;
>
> + /* needed for access to ->page_list: */
> 	spin_lock(&section->lock);
>
> -	page = list_first_entry(&section->laundry_list,
> +	page = list_first_entry(&section->init_laundry_list,
> struct sgx_epc_page, list);
>
> ret = __eremove(sgx_get_epc_virt_addr(page));
> @@ -56,7 +58,7 @@ static void sgx_sanitize_section(struct
> cond_resched();
> }
>
> -	list_splice(&dirty, &section->laundry_list);
> +	list_splice(&dirty, &section->init_laundry_list);
> }
>
> static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
> @@ -418,7 +420,7 @@ static int ksgxswapd(void *p)
> sgx_sanitize_section(&sgx_epc_sections[i]);
>
> /* Should never happen. */
> - if (!list_empty(&sgx_epc_sections[i].laundry_list))
> + if (!list_empty(&sgx_epc_sections[i].init_laundry_list))
> WARN(1, "EPC section %d has unsanitized pages.\n", i);
> }
>
> @@ -632,13 +634,13 @@ static bool __init sgx_setup_epc_section
> section->phys_addr = phys_addr;
> 	spin_lock_init(&section->lock);
> 	INIT_LIST_HEAD(&section->page_list);
> -	INIT_LIST_HEAD(&section->laundry_list);
> +	INIT_LIST_HEAD(&section->init_laundry_list);
>
> for (i = 0; i < nr_pages; i++) {
> section->pages[i].section = index;
> section->pages[i].flags = 0;
> section->pages[i].owner = NULL;
> -	list_add_tail(&section->pages[i].list, &section->laundry_list);
> +	list_add_tail(&section->pages[i].list, &section->init_laundry_list);
> }
>
> section->free_cnt = nr_pages;
> diff -puN arch/x86/kernel/cpu/sgx/sgx.h~sgx-laundry-comments arch/x86/kernel/cpu/sgx/sgx.h
> --- a/arch/x86/kernel/cpu/sgx/sgx.h~sgx-laundry-comments 2020-11-16 13:55:42.627972349 -0800
> +++ b/arch/x86/kernel/cpu/sgx/sgx.h 2020-11-16 13:55:42.631972349 -0800
> @@ -34,15 +34,24 @@ struct sgx_epc_page {
> * physical memory e.g. for memory areas of the each node. This structure is
> * used to store EPC pages for one EPC section and virtual memory area where
> * the pages have been mapped.
> + *
> + * 'lock' must be held before accessing 'page_list' or 'free_cnt'.
> */
> struct sgx_epc_section {
> unsigned long phys_addr;
> void *virt_addr;
> - struct list_head page_list;
> - struct list_head laundry_list;
> struct sgx_epc_page *pages;
> - unsigned long free_cnt;
> +
> spinlock_t lock;
> + struct list_head page_list;
> + unsigned long free_cnt;
> +
> + /*
> + * Pages which need EREMOVE run on them before they can be
> + * used. Only safe to be accessed in ksgxd and init code.
> + * Not protected by locks.
> + */
> + struct list_head init_laundry_list;
> };
>
> extern struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
> _
>