Date:   Mon, 19 Oct 2020 13:21:09 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>
Cc:     x86@...nel.org, linux-sgx@...r.kernel.org,
        linux-kernel@...r.kernel.org, Jethro Beekman <jethro@...tanix.com>,
        Haitao Huang <haitao.huang@...ux.intel.com>,
        Chunyang Hui <sanqian.hcy@...fin.com>,
        Jordan Hand <jorhand@...ux.microsoft.com>,
        Nathaniel McCallum <npmccallum@...hat.com>,
        Seth Moore <sethmo@...gle.com>,
        Darren Kenny <darren.kenny@...cle.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Suresh Siddha <suresh.b.siddha@...el.com>,
        akpm@...ux-foundation.org, andriy.shevchenko@...ux.intel.com,
        asapek@...gle.com, bp@...en8.de, cedric.xing@...el.com,
        chenalexchen@...gle.com, conradparker@...gle.com,
        cyhanish@...gle.com, haitao.huang@...el.com, kai.huang@...el.com,
        kai.svahn@...el.com, kmoy@...gle.com, ludloff@...gle.com,
        luto@...nel.org, nhorman@...hat.com, puiterwijk@...hat.com,
        rientjes@...gle.com, tglx@...utronix.de, yaozhangx@...gle.com,
        mikko.ylinen@...el.com
Subject: Re: [PATCH v39 12/24] x86/sgx: Add SGX_IOC_ENCLAVE_CREATE

On 10/17/20 9:26 PM, Jarkko Sakkinen wrote:
...
>>> +static int sgx_validate_secs(const struct sgx_secs *secs)
>>> +{
>>
>> What's the overall point of this function?  Does it avoid a #GP from an
>> instruction later?
>>
>> Does all of the 'secs' content come from userspace?
> 
> Yes, it does avoid a #GP, and all the data comes from user space.

Please comment the function to indicate this.

But, in general, why do we care to avoid a #GP?  Is it just because we
don't have infrastructure in-kernel to suppress the resulting panic()?
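
For reference, my reading of the ioctl path elsewhere in this patch
(names below are approximate, not quoted from the patch) is that the
whole SECS is copied in from userspace before it ever reaches this
validator:

	struct sgx_enclave_create create_arg;
	struct sgx_secs *secs;
	int ret;

	if (copy_from_user(&create_arg, (void __user *)arg, sizeof(create_arg)))
		return -EFAULT;

	/* The SECS itself is a page-sized, fully userspace-controlled blob. */
	secs = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!secs)
		return -ENOMEM;

	if (copy_from_user(secs, (void __user *)create_arg.src, PAGE_SIZE))
		ret = -EFAULT;
	else
		ret = sgx_encl_create(encl, secs);

	kfree(secs);
	return ret;

That context -- everything in 'secs' is attacker-controlled -- is exactly
what the comment should spell out.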

>>> +	u64 max_size = (secs->attributes & SGX_ATTR_MODE64BIT) ?
>>> +		       sgx_encl_size_max_64 : sgx_encl_size_max_32;
>>> +
>>> +	if (secs->size < (2 * PAGE_SIZE) || !is_power_of_2(secs->size))
>>> +		return -EINVAL;
>>> +
>>> +	if (secs->base & (secs->size - 1))
>>> +		return -EINVAL;
>>> +
>>> +	if (secs->miscselect & sgx_misc_reserved_mask ||
>>> +	    secs->attributes & sgx_attributes_reserved_mask ||
>>> +	    secs->xfrm & sgx_xfrm_reserved_mask)
>>> +		return -EINVAL;
>>> +
>>> +	if (secs->size > max_size)
>>> +		return -EINVAL;
>>> +
>>> +	if (!(secs->xfrm & XFEATURE_MASK_FP) ||
>>> +	    !(secs->xfrm & XFEATURE_MASK_SSE) ||
>>> +	    (((secs->xfrm >> XFEATURE_BNDREGS) & 1) != ((secs->xfrm >> XFEATURE_BNDCSR) & 1)))
>>> +		return -EINVAL;
>>> +
>>> +	if (!secs->ssa_frame_size)
>>> +		return -EINVAL;
>>> +
>>> +	if (sgx_calc_ssa_frame_size(secs->miscselect, secs->xfrm) > secs->ssa_frame_size)
>>> +		return -EINVAL;
>>> +
>>> +	if (memchr_inv(secs->reserved1, 0, sizeof(secs->reserved1)) ||
>>> +	    memchr_inv(secs->reserved2, 0, sizeof(secs->reserved2)) ||
>>> +	    memchr_inv(secs->reserved3, 0, sizeof(secs->reserved3)) ||
>>> +	    memchr_inv(secs->reserved4, 0, sizeof(secs->reserved4)))
>>> +		return -EINVAL;
>>> +
>>> +	return 0;
>>> +}
>>
>> I think it would be nice to at least have one comment per condition to
>> explain what's going on there.
...
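
Something like this is the level of detail I have in mind (just a sketch,
the exact wording doesn't matter):

	/* The enclave base address must be naturally aligned to its size. */
	if (secs->base & (secs->size - 1))
		return -EINVAL;

	/*
	 * Reserved bits in MISCSELECT, ATTRIBUTES and XFRM must be
	 * clear; ENCLS[ECREATE] would fault on them otherwise.
	 */
	if (secs->miscselect & sgx_misc_reserved_mask ||
	    secs->attributes & sgx_attributes_reserved_mask ||
	    secs->xfrm & sgx_xfrm_reserved_mask)
		return -EINVAL;

	/* The SSA frame must be big enough for the selected MISC/XFRM. */
	if (sgx_calc_ssa_frame_size(secs->miscselect, secs->xfrm) > secs->ssa_frame_size)
		return -EINVAL;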


>>> +static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
>>> +{
>>> +	struct sgx_epc_page *secs_epc;
>>> +	struct sgx_pageinfo pginfo;
>>> +	struct sgx_secinfo secinfo;
>>> +	unsigned long encl_size;
>>> +	struct file *backing;
>>> +	long ret;
>>> +
>>> +	if (sgx_validate_secs(secs)) {
>>> +		pr_debug("invalid SECS\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/* The extra page goes to SECS. */
>>> +	encl_size = secs->size + PAGE_SIZE;
>>> +
>>> +	backing = shmem_file_setup("SGX backing", encl_size + (encl_size >> 5),
>>> +				   VM_NORESERVE);
>>
>> What's the >>5 adjustment for?
> 
> The backing storage stores not only the swapped page but also the
> Paging Crypto MetaData (PCMD) structure. It essentially contains
> a CPU-encrypted MAC for a page.
> 
> The MAC is over the page version and data. The version is stored in
> an EPC page called a Version Array (VA) page.
> 
> Both of these are needed by ENCLS[ELDU].

	/*
	 * SGX backing storage needs to contain space for both the
	 * EPC data and some metadata called the Paging Crypto
	 * MetaData (PCMD).  The PCMD needs 128 bytes of storage for
	 * each page.
	 */

Also, the MAC is a fixed size, right?  Let's say that x86 got a larger
page size in the future.  Would this number stay at 128 bytes, or would
it scale as PAGE_SIZE/32?

If it's a fixed size, I'd write:

	size = encl_size;
	size += (encl_size / PAGE_SIZE) * SGX_PCMD_PER_PAGE;

If it really is 1/32nd, I'd write:

	size += encl_size / SGX_PCMD_RATIO;

or something.

Either way, the >>5 is total magic and needs comments and fixing.
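
Putting those together, the call site could then read something like this
(SGX_PCMD_PER_PAGE is a made-up name, and I'm assuming the metadata really
is a fixed 128 bytes per page):

	/*
	 * The backing storage holds both the swapped-out enclave pages
	 * and the per-page Paging Crypto MetaData (PCMD) that
	 * ENCLS[ELDU] needs to reload them.
	 */
	backing_size = encl_size +
		       (encl_size / PAGE_SIZE) * SGX_PCMD_PER_PAGE;

	backing = shmem_file_setup("SGX backing", backing_size, VM_NORESERVE);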

>>> +long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>>> +{
>>> +	struct sgx_encl *encl = filep->private_data;
>>> +	int ret, encl_flags;
>>> +
>>> +	encl_flags = atomic_fetch_or(SGX_ENCL_IOCTL, &encl->flags);
>>> +	if (encl_flags & SGX_ENCL_IOCTL)
>>> +		return -EBUSY;
>>
>> Is the SGX_ENCL_IOCTL bit essentially just a lock to single-thread
>> ioctl()s?  Should we name it as such?
> 
> Yes. It makes the concurrency overall easier if we can assume that
> only a single ioctl is in progress. There is no good reason to do
> them in parallel.
> 
> E.g. when you add pages you want to do that serially because the
> order changes the MRENCLAVE.

There are also hardware concurrency requirements, right?  A bunch of the
SGX functions do not even seem to support being called in parallel.

> So should I rename it to SGX_ENCL_IOCTL_LOCKED?

I'd rather not see hand-rolled locking primitives, frankly.
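
If the only goal is to single-thread the ioctl()s, a plain mutex would be
much more obvious.  Roughly (encl->ioctl_lock and sgx_ioctl_locked() are
made-up names, just to sketch the shape):

	int ret;

	/* Serialize enclave-building ioctl()s with a real lock. */
	if (!mutex_trylock(&encl->ioctl_lock))
		return -EBUSY;

	ret = sgx_ioctl_locked(filep, cmd, arg);

	mutex_unlock(&encl->ioctl_lock);

	return ret;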
