Date:   Fri, 23 Feb 2018 17:35:40 +0200
From:   Oleksandr Andrushchenko <Oleksandr_Andrushchenko@...m.com>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Oleksandr Andrushchenko <andr2000@...il.com>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
        dri-devel@...ts.freedesktop.org, airlied@...ux.ie,
        daniel.vetter@...el.com, seanpaul@...omium.org,
        gustavo@...ovan.org, jgross@...e.com, konrad.wilk@...cle.com
Subject: Re: [PATCH 8/9] drm/xen-front: Implement GEM operations

On 02/23/2018 05:26 PM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>> +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
>> +{
>> +	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> +	struct xen_gem_object *xen_obj;
>> +	int ret;
>> +
>> +	size = round_up(size, PAGE_SIZE);
>> +	xen_obj = gem_create_obj(dev, size);
>> +	if (IS_ERR_OR_NULL(xen_obj))
>> +		return xen_obj;
>> +
>> +	if (drm_info->cfg->be_alloc) {
>> +		/*
>> +		 * backend will allocate space for this buffer, so
>> +		 * only allocate array of pointers to pages
>> +		 */
>> +		xen_obj->be_alloc = true;
> If be_alloc is a flag (which I am not sure about) --- should it be set
> to true *after* you've successfully allocated your things?
This is a configuration option that tells which side allocates the
buffer: the frontend or the backend (be_alloc means the buffer is
allocated by the backend).
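
For reference, a minimal sketch of how such a flag could be read from
XenStore at frontend probe time. The "be-alloc" node name follows this
driver's configuration scheme, but the helper name and the cfg structure
layout here are only illustrative, not the exact patch code:

#include <xen/xenbus.h>

/*
 * Illustrative sketch only: read the optional "be-alloc" node from the
 * frontend's XenStore directory; if it is absent, default to 0, i.e.
 * buffers are allocated by the frontend.
 */
static void cfg_read_be_alloc(struct xenbus_device *xb_dev,
			      struct xen_drm_front_cfg *cfg)
{
	cfg->be_alloc = xenbus_read_unsigned(xb_dev->nodename,
					     "be-alloc", 0);
}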
>
>> +		ret = gem_alloc_pages_array(xen_obj, size);
>> +		if (ret < 0) {
>> +			gem_free_pages_array(xen_obj);
>> +			goto fail;
>> +		}
>> +
>> +		ret = alloc_xenballooned_pages(xen_obj->num_pages,
>> +				xen_obj->pages);
> Why are you allocating balloon pages?
In this use-case we map pages provided by the backend.
(Yes, I know this can be a problem both from a security POV
and because a DomU can die while holding Dom0's pages forever;
but it is a configuration option, so the user decides whether
their use-case needs this and takes responsibility for that
decision.)
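
To make the flow concrete, a rough sketch of that mapping step follows:
ballooned-out frames are reserved in the frontend and the backend's grant
references are mapped into them. The gnttab/balloon calls are real kernel
APIs; the function name, the grefs/otherend_id parameters and the trimmed
error handling are illustrative assumptions, not the patch code:

#include <linux/slab.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/* Hypothetical helper: grefs[] and otherend_id would come from the
 * backend via the display protocol. */
static int map_backend_buffer(grant_ref_t *grefs, domid_t otherend_id,
			      struct page **pages, int num_pages)
{
	struct gnttab_map_grant_ref *map_ops;
	int i, ret;

	/*
	 * The frames backing these pages are returned to the hypervisor,
	 * so mapping foreign (backend) grants into them does not consume
	 * extra DomU memory.
	 */
	ret = alloc_xenballooned_pages(num_pages, pages);
	if (ret < 0)
		return ret;

	map_ops = kcalloc(num_pages, sizeof(*map_ops), GFP_KERNEL);
	if (!map_ops) {
		free_xenballooned_pages(num_pages, pages);
		return -ENOMEM;
	}

	for (i = 0; i < num_pages; i++)
		gnttab_set_map_op(&map_ops[i],
				  (unsigned long)pfn_to_kaddr(page_to_xen_pfn(pages[i])),
				  GNTMAP_host_map, grefs[i], otherend_id);

	/* Per-op status checks omitted for brevity. */
	ret = gnttab_map_refs(map_ops, NULL, pages, num_pages);
	kfree(map_ops);
	return ret;
}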

Please see the description of the buffering modes in xen_drm_front.h,
specifically for backend-allocated buffers:
  *******************************************************************************
  * 2. Buffers allocated by the backend
  *******************************************************************************
  *
  * This mode of operation is configured at run-time via guest domain
  * configuration through XenStore entries.
  *
  * For systems which do not provide IOMMU support but have specific
  * requirements for display buffers, it is possible to allocate such buffers
  * at the backend side and share those with the frontend.
  * For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
  * expecting physically contiguous memory, this allows implementing zero-copy
  * use-cases.
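
To illustrate the run-time configuration this refers to: with xl, the mode
can be requested per guest in the domain configuration. The vdispl syntax
below is an assumption based on xl.cfg, shown only as an example; the
toolstack then writes the corresponding XenStore node for the frontend to
read:

vdispl = [ 'backend=DomD,be-alloc=1,connectors=id0:1920x1080' ]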

>
> -boris
>
>> +		if (ret < 0) {
>> +			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>> +					xen_obj->num_pages, ret);
>> +			goto fail;
>> +		}
>> +
>> +		return xen_obj;
>> +	}
>> +	/*
>> +	 * need to allocate backing pages now, so we can share those
>> +	 * with the backend
>> +	 */
>> +	xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>> +	xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>> +	if (IS_ERR_OR_NULL(xen_obj->pages)) {
>> +		ret = PTR_ERR(xen_obj->pages);
>> +		xen_obj->pages = NULL;
>> +		goto fail;
>> +	}
>> +
>> +	return xen_obj;
>> +
>> +fail:
>> +	DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>> +	return ERR_PTR(ret);
>> +}
>> +
>>
