Message-Id: <1547836667-13695-3-git-send-email-lmark@codeaurora.org>
Date:   Fri, 18 Jan 2019 10:37:45 -0800
From:   Liam Mark <lmark@...eaurora.org>
To:     labbott@...hat.com, sumit.semwal@...aro.org
Cc:     arve@...roid.com, tkjos@...roid.com, maco@...roid.com,
        joel@...lfernandes.org, christian@...uner.io,
        devel@...verdev.osuosl.org, dri-devel@...ts.freedesktop.org,
        linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org,
        afd@...com, john.stultz@...aro.org,
        Liam Mark <lmark@...eaurora.org>
Subject: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

The ION begin_cpu_access and end_cpu_access functions use the
dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
maintenance.

Currently it is possible to apply cache maintenance, via the
begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
dma mapped.

The dma sync sg APIs should not be called on sg lists which have not been
dma mapped, as this can result in cache maintenance being applied to the
wrong address. If an sg list has not been dma mapped then its dma_address
field has not been populated, and some dma ops, such as swiotlb_dma_ops,
use the dma_address field to calculate the address on which to apply
cache maintenance.
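
For reference, the expected ordering with the streaming DMA API is roughly
the following (illustrative sketch only; the dev and table names are
placeholders, not code from this driver):

	/*
	 * dma_map_sg() is what populates the dma_address fields; only
	 * after it succeeds do the sync calls act on a valid address.
	 */
	if (!dma_map_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;

	/* CPU is about to access the buffer */
	dma_sync_sg_for_cpu(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);

	/* ... CPU reads/writes ... */

	/* hand the buffer back to the device */
	dma_sync_sg_for_device(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);

	dma_unmap_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);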

Also I don't think we want CMOs to be applied to a buffer which is not
dma mapped, as the memory should already be coherent for access from the
CPU. Any CMOs required for device access are taken care of in the
dma_buf_map_attachment and dma_buf_unmap_attachment calls.
So it only makes sense for begin_cpu_access and end_cpu_access to
apply CMOs if the buffer is dma mapped.
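
From the importer's point of view the intended flow is along these lines
(sketch only; dmabuf, dev, attach and table are placeholders):

	attach = dma_buf_attach(dmabuf, dev);
	/* CMOs needed for device access happen as part of the mapping */
	table = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

	/* CPU access is bracketed by begin/end, which only need to sync
	 * attachments that are currently dma mapped */
	dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	/* ... CPU reads/writes the buffer ... */
	dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);

	/* CMOs for handing the buffer back happen as part of the unmap */
	dma_buf_unmap_attachment(attach, table, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);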

Fix the ION begin_cpu_access and end_cpu_access functions to only apply
cache maintenance to buffers which are dma mapped.

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark <lmark@...eaurora.org>
---
 drivers/staging/android/ion/ion.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 6f5afab7c1a1..1fe633a7fdba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 
 	a->table = table;
 	a->dev = attachment->dev;
+	a->dma_mapped = false;
 	INIT_LIST_HEAD(&a->list);
 
 	attachment->priv = a;
@@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct ion_dma_buf_attachment *a = attachment->priv;
 	struct sg_table *table;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
 
 	table = a->table;
 
+	mutex_lock(&buffer->lock);
 	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-			direction))
+			direction)) {
+		mutex_unlock(&buffer->lock);
 		return ERR_PTR(-ENOMEM);
+	}
+	a->dma_mapped = true;
+	mutex_unlock(&buffer->lock);
 
 	return table;
 }
@@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	struct ion_dma_buf_attachment *a = attachment->priv;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
 	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	a->dma_mapped = false;
+	mutex_unlock(&buffer->lock);
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-				    direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+					    a->table->nents, direction);
 	}
 
 unlock:
@@ -369,8 +384,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-				       direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_device(a->dev, a->table->sgl,
+					       a->table->nents, direction);
 	}
 	mutex_unlock(&buffer->lock);
 
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project
