Message-ID: <d66b03aa-bb36-01b2-6b10-fdae3bcd4555@redhat.com>
Date:   Mon, 12 Feb 2018 12:39:33 -0800
From:   Laura Abbott <labbott@...hat.com>
To:     Liam Mark <lmark@...eaurora.org>,
        Sumit Semwal <sumit.semwal@...aro.org>
Cc:     Martijn Coenen <maco@...roid.com>, Todd Kjos <tkjos@...roid.com>,
        Arve Hjønnevåg <arve@...roid.com>,
        linux-kernel@...r.kernel.org, devel@...verdev.osuosl.org,
        linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH] staging: android: ion: Restrict cache maintenance to dma
 mapped memory

On 02/09/2018 10:21 PM, Liam Mark wrote:
> The ION begin_cpu_access and end_cpu_access functions use the
> dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> maintenance.
> 
> Currently it is possible to apply cache maintenance, via the
> begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> dma mapped.
> 
> The dma sync sg APIs should not be called on sg lists which have not
> been dma mapped, as this can result in cache maintenance being applied
> to the wrong address. If an sg list has not been dma mapped then its
> dma_address field has not been populated; some dma ops, such as
> swiotlb_dma_ops, use the dma_address field to calculate the address to
> which cache maintenance should be applied.
> 
> Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> cache maintenance to buffers which have been dma mapped.
> 
I think this looks okay. I was initially concerned about concurrency and
setting the dma_mapped flag, but I think that should be handled by the
caller synchronizing map/unmap/cpu_access calls (we might need to
re-evaluate in the future).
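
For reference, the serialized ordering I have in mind is something like
the sketch below (a minimal importer outline under that assumption;
dmabuf/dev are placeholder names and error handling is elided):

	/*
	 * Hedged sketch of a well-behaved dma-buf importer. Because the
	 * importer serializes map/unmap against cpu_access, the
	 * dma_mapped flag is never read and written concurrently.
	 */
	struct dma_buf_attachment *att;
	struct sg_table *sgt;

	att = dma_buf_attach(dmabuf, dev);
	sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);  /* dma_mapped -> true */

	dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);     /* sync runs: mapped */
	/* ... CPU reads/writes the buffer ... */
	dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);

	dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL); /* dma_mapped -> false */
	dma_buf_detach(dmabuf, att);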

I would like to hold off on queuing this for just a little bit until I
finish working on the Ion unit test (doing this in the complete opposite
order, of course). I'm assuming this passed your internal tests, Liam?
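
To make the failure mode from your commit message concrete, here is a
rough illustration (loosely modeled on swiotlb-style sync as I
understand it, not the actual kernel code) of why syncing a never-mapped
sg list goes wrong:

	/*
	 * Illustrative only: a dma_ops sync along these lines consumes
	 * sg_dma_address() unconditionally. That field is only valid
	 * after a successful dma_map_sg(), so on a never-mapped list
	 * 'addr' is uninitialized and any cache maintenance based on it
	 * hits the wrong address.
	 */
	static void example_sync_sg_for_cpu(struct device *dev,
					    struct scatterlist *sgl, int nents,
					    enum dma_data_direction dir)
	{
		struct scatterlist *sg;
		int i;

		for_each_sg(sgl, sg, nents, i) {
			dma_addr_t addr = sg_dma_address(sg); /* garbage if unmapped */
			/* ... locate bounce buffer / flush caches at addr ... */
			(void)addr;
		}
	}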

Thanks,
Laura

> Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> Signed-off-by: Liam Mark <lmark@...eaurora.org>
> ---
>   drivers/staging/android/ion/ion.c | 16 ++++++++++++----
>   1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> index f480885e346b..e5df5272823d 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -214,6 +214,7 @@ struct ion_dma_buf_attachment {
>   	struct device *dev;
>   	struct sg_table *table;
>   	struct list_head list;
> +	bool dma_mapped;
>   };
>   
>   static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
> @@ -235,6 +236,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
>   
>   	a->table = table;
>   	a->dev = dev;
> +	a->dma_mapped = false;
>   	INIT_LIST_HEAD(&a->list);
>   
>   	attachment->priv = a;
> @@ -272,6 +274,7 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
>   			direction))
>   		return ERR_PTR(-ENOMEM);
>   
> +	a->dma_mapped = true;
>   	return table;
>   }
>   
> @@ -279,7 +282,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
>   			      struct sg_table *table,
>   			      enum dma_data_direction direction)
>   {
> +	struct ion_dma_buf_attachment *a = attachment->priv;
> +
>   	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> +	a->dma_mapped = false;
>   }
>   
>   static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> @@ -345,8 +351,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   
>   	mutex_lock(&buffer->lock);
>   	list_for_each_entry(a, &buffer->attachments, list) {
> -		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> -				    direction);
> +		if (a->dma_mapped)
> +			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
> +					    a->table->nents, direction);
>   	}
>   	mutex_unlock(&buffer->lock);
>   
> @@ -367,8 +374,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>   
>   	mutex_lock(&buffer->lock);
>   	list_for_each_entry(a, &buffer->attachments, list) {
> -		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> -				       direction);
> +		if (a->dma_mapped)
> +			dma_sync_sg_for_device(a->dev, a->table->sgl,
> +					       a->table->nents, direction);
>   	}
>   	mutex_unlock(&buffer->lock);
>   
> 
