Message-ID: <20200414141849.55654-1-orjan.eide@arm.com>
Date: Tue, 14 Apr 2020 16:18:47 +0200
From: Ørjan Eide <orjan.eide@....com>
To: unlisted-recipients:; (no To-header on input)
CC: <nd@....com>, <orjan.eide@....com>, <anders.pedersen@....com>,
<john.stultz@...aro.org>, Laura Abbott <labbott@...hat.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>,
Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>,
Christian Brauner <christian@...uner.io>,
"Daniel Vetter" <daniel.vetter@...ll.ch>,
"Darren Hart (VMware)" <dvhart@...radead.org>,
Lecopzer Chen <lecopzer.chen@...iatek.com>,
"Arnd Bergmann" <arnd@...db.de>, <devel@...verdev.osuosl.org>,
<dri-devel@...ts.freedesktop.org>,
<linaro-mm-sig@...ts.linaro.org>, <linux-kernel@...r.kernel.org>,
<linux-media@...r.kernel.org>
Subject: [PATCH] staging: android: ion: Skip sync if not mapped
Only sync the sg-list of an Ion dma-buf attachment when the attachment
is actually mapped on the device.
A dma-buf may be synced at any time. The sync path is reachable from
user space via DMA_BUF_IOCTL_SYNC, so there are no guarantees from
callers about when syncs may be attempted, and
dma_buf_begin_cpu_access() and dma_buf_end_cpu_access() may not be
paired.
Since the sg_list's dma_address isn't set up until the buffer is used
on a device and dma_map_sg() has been called on it, the dma_address
will be NULL if a sync is attempted on the dma-buf before it is mapped
on a device.
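For illustration, here is a minimal user-space sketch of that path
(assumptions: buf_fd is a dma-buf fd obtained from an exporter such as
ion, a device has attached but not yet mapped the buffer, and the
helper name is made up):

#include <linux/dma-buf.h>	/* struct dma_buf_sync, DMA_BUF_IOCTL_SYNC */
#include <stdio.h>
#include <sys/ioctl.h>

/*
 * Sketch only: buf_fd is assumed to be a dma-buf fd whose attachment
 * has not yet been mapped with dma_buf_map_attachment().
 */
int sync_unmapped_dmabuf(int buf_fd)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	/* Reaches the exporter's begin_cpu_access callback. */
	if (ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0) {
		perror("DMA_BUF_IOCTL_SYNC start");
		return -1;
	}

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;

	/* ...and the exporter's end_cpu_access callback. */
	if (ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0) {
		perror("DMA_BUF_IOCTL_SYNC end");
		return -1;
	}

	return 0;
}

With an unpatched ion, this sync walks the attachment's sg-list even
though dma_map_sg() has never run on it.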
Before v5.0 (commit 55897af63091 ("dma-direct: merge swiotlb_dma_ops
into the dma_direct code")) this was a problem, as the dma-api (at
least the swiotlb_dma_ops on arm64) would use the potentially invalid
dma_address. How that failed depended on how the device handled
physical address 0: if 0 was a valid address into physical RAM, that
page would get flushed repeatedly while the actual pages in the buffer
were not synced correctly; if 0 was an invalid physical address, the
sync could fault and trigger a crash.
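A kernel-side illustration of why physical address 0 is what gets hit
(sketch only, not part of the patch; it just shows that a never-mapped
sg_table carries a zero dma_address):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Sketch: sg_alloc_table() zero-initialises the entries, so until
 * dma_map_sg() runs, sg_dma_address() is 0 for every entry.
 */
static void show_unmapped_sg_table(struct device *dev)
{
	struct sg_table table;

	if (sg_alloc_table(&table, 1, GFP_KERNEL))
		return;

	/* No dma_map_sg() yet, so the dma_address is still 0. */
	WARN_ON(sg_dma_address(table.sgl) != 0);

	/*
	 * A sync that walks dma_address (pre-v5.0, and again after
	 * commit 449fa54d6815) would target physical address 0 here:
	 *
	 * dma_sync_sg_for_cpu(dev, table.sgl, table.nents,
	 *                     DMA_FROM_DEVICE);
	 */

	sg_free_table(&table);
}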
In v5.0 this was incidentally fixed by commit 55897af63091 ("dma-direct:
merge swiotlb_dma_ops into the dma_direct code"), as this moved the
dma-api to use the page pointer in the sg_list, and (for Ion buffers at
least) this will always be valid if the sg_list exists at all.
But the issue was reintroduced in v5.3 by
commit 449fa54d6815 ("dma-direct: correct the physical addr in
dma_direct_sync_sg_for_cpu/device"), which moves the dma-api back to
the old behaviour and picks up the dma_address that may be invalid.
The dma-buf core doesn't ensure that the buffer is mapped on the
device, and thus has a valid sg_list, before calling the exporter's
begin_cpu_access.
Signed-off-by: Ørjan Eide <orjan.eide@....com>
---
drivers/staging/android/ion/ion.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
Resubmit without disclaimer, sorry about that.
This seems to be part of a bigger issue where dma-buf exporters assume
that their dma-buf begin_cpu_access and end_cpu_access callbacks have a
certain guaranteed behavior, which isn't ensured by dma-buf core.
This patch fixes the issue in ion only, but it also needs to be fixed
for other exporters, either handled like this in each exporter, or in
dma-buf core before it calls into the exporters.
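As a rough sketch of that core-level alternative (hypothetical code,
not current dma-buf core: the map_count field does not exist and is
only assumed here, maintained by dma_buf_map_attachment() and
dma_buf_unmap_attachment() under a suitable lock):

int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
			     enum dma_data_direction direction)
{
	if (WARN_ON(!dmabuf))
		return -EINVAL;

	/*
	 * No attachment has been mapped, so no sg-list has a valid
	 * dma_address yet; skip the exporter callback entirely.
	 */
	if (!dmabuf->map_count)
		return 0;

	if (dmabuf->ops->begin_cpu_access)
		return dmabuf->ops->begin_cpu_access(dmabuf, direction);

	return 0;
}

end_cpu_access would need the same guard. Per-attachment tracking, as
done in this patch, is more precise, since individual attachments can
be mapped and unmapped independently.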
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 38b51eace4f9..7b752ba0cb6d 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -173,6 +173,7 @@ struct ion_dma_buf_attachment {
struct device *dev;
struct sg_table *table;
struct list_head list;
+ bool mapped:1;
};
static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -195,6 +196,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
a->table = table;
a->dev = attachment->dev;
INIT_LIST_HEAD(&a->list);
+ a->mapped = false;
attachment->priv = a;
@@ -231,6 +233,8 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
direction))
return ERR_PTR(-ENOMEM);
+ a->mapped = true;
+
return table;
}
@@ -238,6 +242,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
{
+ struct ion_dma_buf_attachment *a = attachment->priv;
+
+ a->mapped = false;
+
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
}
@@ -297,6 +305,8 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
direction);
}
@@ -320,6 +330,8 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
+ if (!a->mapped)
+ continue;
dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
direction);
}
--
2.17.1