Message-Id: <20210205170030.856723-1-ira.weiny@intel.com>
Date: Fri, 5 Feb 2021 09:00:30 -0800
From: ira.weiny@...el.com
To: Jarkko Sakkinen <jarkko@...nel.org>
Cc: Ira Weiny <ira.weiny@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Sean Christopherson <seanjc@...gle.com>,
Jethro Beekman <jethro@...tanix.com>,
linux-kernel@...r.kernel.org, linux-sgx@...r.kernel.org
Subject: [PATCH v4] x86: Remove unnecessary kmap() from sgx_ioc_enclave_init()
From: Ira Weiny <ira.weiny@...el.com>
kmap() is inefficient and we are trying to reduce its usage in the kernel.
There is no readily apparent reason why initp_page needs to be allocated
and kmap()'ed, but sigstruct needs to be page aligned and token
512 byte aligned.
kmalloc() can give us this alignment but we need to allocate PAGE_SIZE
bytes to do so. Rather than change this kmap() to kmap_local_page(),
use kmalloc() instead.
Remove the alloc_page()/kmap() and replace with kmalloc(PAGE_SIZE, ...)
to get a page aligned kernel address to use.
In addition add a comment to document the alignment requirements so that
others like myself don't attempt to 'fix' this again.
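For illustration only (not part of the patch), the layout this relies on can
be written as compile-time checks in the spirit of the BUILD_BUG_ONs removed
in this revision (see the changelog below). This is a sketch using the
existing struct sgx_sigstruct, SGX_LAUNCH_TOKEN_SIZE and PAGE_SIZE
definitions, not necessarily the exact checks from the earlier versions:

	/* sigstruct sits at the start of the page and must fit in the first half */
	BUILD_BUG_ON(sizeof(struct sgx_sigstruct) > PAGE_SIZE / 2);
	/* the token is placed at PAGE_SIZE / 2 and must fit in the second half */
	BUILD_BUG_ON(SGX_LAUNCH_TOKEN_SIZE > PAGE_SIZE / 2);
	/* PAGE_SIZE / 2 (2048) is a multiple of 512, so the token is 512 byte aligned */
	BUILD_BUG_ON((PAGE_SIZE / 2) % 512);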
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Sean Christopherson <seanjc@...gle.com>
Cc: Jethro Beekman <jethro@...tanix.com>
Signed-off-by: Ira Weiny <ira.weiny@...el.com>
---
Changes from v3[3]:
Remove BUILD_BUG_ONs
Changes from v2[2]:
When allocating a power of 2 size, kmalloc() now guarantees the
alignment of the respective size. So go back to using kmalloc(), but
with a PAGE_SIZE allocation to get the alignment. This also follows
the pattern in sgx_ioc_enclave_create().
Changes from v1[1]:
Use page_address() instead of kmalloc() to ensure sigstruct is
page aligned
Use BUILD_BUG_ON to ensure token and sigstruct don't collide.
[1] https://lore.kernel.org/lkml/20210129001459.1538805-1-ira.weiny@intel.com/
[2] https://lore.kernel.org/lkml/20210202013725.3514671-1-ira.weiny@intel.com/
[3] https://lore.kernel.org/lkml/20210205050850.GC5033@iweiny-DESK2.sc.intel.com/#t
---
arch/x86/kernel/cpu/sgx/ioctl.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 90a5caf76939..38e540de5e2a 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -604,7 +604,6 @@ static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
{
struct sgx_sigstruct *sigstruct;
struct sgx_enclave_init init_arg;
- struct page *initp_page;
void *token;
int ret;
@@ -615,11 +614,14 @@ static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
if (copy_from_user(&init_arg, arg, sizeof(init_arg)))
return -EFAULT;
- initp_page = alloc_page(GFP_KERNEL);
- if (!initp_page)
+ /*
+ * sigstruct must be on a page boundary and token on a 512 byte boundary.
+ * kmalloc() gives us this alignment when allocating PAGE_SIZE bytes.
+ */
+ sigstruct = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!sigstruct)
return -ENOMEM;
- sigstruct = kmap(initp_page);
token = (void *)((unsigned long)sigstruct + PAGE_SIZE / 2);
memset(token, 0, SGX_LAUNCH_TOKEN_SIZE);
@@ -645,8 +647,7 @@ static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
ret = sgx_encl_init(encl, sigstruct, token);
out:
- kunmap(initp_page);
- __free_page(initp_page);
+ kfree(sigstruct);
return ret;
}
--
2.28.0.rc0.12.gb6a658bd00c9