Message-ID: <2937e443-e83f-a8c4-674e-33a49d8d4ba1@oracle.com>
Date:   Fri, 10 Feb 2017 11:17:36 -0500
From:   Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:     Paul Durrant <paul.durrant@...rix.com>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org
Cc:     Juergen Gross <jgross@...e.com>
Subject: Re: [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP

On 02/10/2017 09:24 AM, Paul Durrant wrote:
> +static long privcmd_ioctl_dm_op(void __user *udata)
> +{
> +	struct privcmd_dm_op kdata;
> +	struct privcmd_dm_op_buf *kbufs;
> +	unsigned int nr_pages = 0;
> +	struct page **pages = NULL;
> +	struct xen_dm_op_buf *xbufs = NULL;
> +	unsigned int i;
> +	long rc;
> +
> +	if (copy_from_user(&kdata, udata, sizeof(kdata)))
> +		return -EFAULT;
> +
> +	if (kdata.num == 0)
> +		return 0;
> +
> +	/*
> +	 * Set a tolerable upper limit on the number of buffers
> +	 * without being overly restrictive, since we can't easily
> +	 * predict what future dm_ops may require.
> +	 */

I think this limit deserves its own macro since it really has nothing
to do with the page size, does it? Especially since you reference the
same limit again below.
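Something along these lines, perhaps (just a sketch, the macro name is
made up):

	/*
	 * Arbitrary cap on the memory used for the buffer descriptor
	 * array; not really related to the page size.
	 */
	#define PRIVCMD_DM_OP_BUFS_MAX_SIZE	4096

	if (kdata.num * sizeof(*kbufs) > PRIVCMD_DM_OP_BUFS_MAX_SIZE)
		return -E2BIG;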


> +	if (kdata.num * sizeof(*kbufs) > PAGE_SIZE)
> +		return -E2BIG;
> +
> +	kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
> +	if (!kbufs)
> +		return -ENOMEM;
> +
> +	if (copy_from_user(kbufs, kdata.ubufs,
> +			   sizeof(*kbufs) * kdata.num)) {
> +		rc = -EFAULT;
> +		goto out;
> +	}
> +
> +	for (i = 0; i < kdata.num; i++) {
> +		if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
> +			       kbufs[i].size)) {
> +			rc = -EFAULT;
> +			goto out;
> +		}
> +
> +		nr_pages += DIV_ROUND_UP(
> +			offset_in_page(kbufs[i].uptr) + kbufs[i].size,
> +			PAGE_SIZE);
> +	}
> +
> +	/*
> +	 * Again, set a tolerable upper limit on the number of pages
> +	 * needed to lock all the buffers without being overly
> +	 * restrictive, since we can't easily predict the size of
> +	 * buffers future dm_ops may use.
> +	 */

OTOH, these two cases describe different types of copying (the first
limit applies to the buffer descriptors and the second to the buffers
themselves), so should they really be bounded by the same value?
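If not, a separate knob for the page count might be cleaner, e.g.
(name and value made up, just to illustrate):

	/* Separate cap on the number of pages locked for the buffers. */
	#define PRIVCMD_DM_OP_MAX_NR_PAGES	256

	if (nr_pages > PRIVCMD_DM_OP_MAX_NR_PAGES) {
		rc = -E2BIG;
		goto out;
	}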

> +	if (nr_pages * sizeof(*pages) > PAGE_SIZE) {
> +		rc = -E2BIG;
> +		goto out;
> +	}
> +
> +	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
> +	if (!pages) {
> +		rc = -ENOMEM;
> +		goto out;
> +	}
> +
> +	xbufs = kcalloc(kdata.num, sizeof(*xbufs), GFP_KERNEL);
> +	if (!xbufs) {
> +		rc = -ENOMEM;
> +		goto out;
> +	}
> +
> +	rc = lock_pages(kbufs, kdata.num, pages, nr_pages);


Aren't those buffers already locked (as Andrew mentioned)? They are
mmapped with MAP_LOCKED.

And I also wonder whether we need to take rlimit(RLIMIT_MEMLOCK) into
account.
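I.e. something like the usual pinning pattern other drivers use
(untested sketch, the check only; the accounting/teardown side would
have to follow):

	unsigned long locked, lock_limit;

	locked = nr_pages + current->mm->pinned_vm;
	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
	if (locked > lock_limit && !capable(CAP_IPC_LOCK)) {
		rc = -ENOMEM;
		goto out;
	}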

-boris


