Message-ID: <5218408.5YRJXjS4BX@wuerfel>
Date: Mon, 12 May 2014 14:00:57 +0200
From: Arnd Bergmann <arnd@...db.de>
To: linux-arm-kernel@...ts.infradead.org
Cc: Pintu Kumar <pintu.k@...look.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linaro-mm-sig@...ts.linaro.org" <linaro-mm-sig@...ts.linaro.org>
Subject: Re: Questions regarding DMA buffer sharing using IOMMU
On Monday 12 May 2014 15:12:41 Pintu Kumar wrote:
> Hi,
> I have some queries regarding IOMMU and CMA buffer sharing.
> We have an embedded Linux device (kernel 3.10, RAM: 256 MB) in
> which the camera and codec support IOMMU but the display does not.
> Thus for camera capture we are using IOMMU buffers allocated through
> ION/DMABUF, but for all display rendering we are using CMA buffers.
> So, the question is how to achieve buffer sharing (zero-copy)
> between Camera and Display using only IOMMU?
> Currently we are achieving zero-copy using CMA, and we are
> exploring options to use IOMMU.
> Now we want to know which option is better: IOMMU or CMA?
> If anybody has come across this design, please share your thoughts and results.
There is a slight performance overhead in using the IOMMU in general,
because the IOMMU has to fetch page table entries from memory at
least some of the time (whenever its TLB misses).
If that overhead is within the constraints you have for transfers between
camera and codec, you are always better off using the IOMMU, since that
means you don't have to do memory migration.
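
For reference, here is a minimal sketch of what the importing side (the
display driver, in your case) does with a dma-buf fd. Only the dma_buf_*
calls are the real in-kernel API; display_import_buffer() and its error
handling are made up for illustration:

/*
 * Hypothetical importer side: map a dma-buf that the camera exported.
 */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static int display_import_buffer(struct device *dev, int fd)
{
	struct dma_buf *dbuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dbuf = dma_buf_get(fd);		/* fd passed over from the camera side */
	if (IS_ERR(dbuf))
		return PTR_ERR(dbuf);

	attach = dma_buf_attach(dbuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dbuf);
		return PTR_ERR(attach);
	}

	/* No copy happens here: the exporter hands over its backing pages. */
	sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dbuf, attach);
		dma_buf_put(dbuf);
		return PTR_ERR(sgt);
	}

	/* ... program sgt into the device (or its IOMMU) here ... */

	return 0;
}

The point is that no data is copied anywhere: if the importer sits behind
an IOMMU, the sg_table coming out of dma_buf_map_attachment() may have
many entries and still be mapped contiguously in bus address space, while
an importer without an IOMMU needs physically contiguous memory, which is
what CMA gives you.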
Note, however, that we don't have a way to describe IOMMU relations
to devices in DT, so whatever you come up with to do this will most
likely be incompatible with what we do in future kernel versions.
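
To illustrate what that platform-specific glue tends to look like: the
iommu_* calls below are the existing in-kernel API, but since there is no
standard DT binding telling the core which IOMMU a device sits behind,
how you find and wire up the IOMMU around code like this is SoC-specific
(camera_attach_iommu() is a made-up name for the sketch):

#include <linux/iommu.h>
#include <linux/device.h>
#include <linux/err.h>

/* Attach a device to a freshly allocated IOMMU domain by hand. */
static struct iommu_domain *camera_attach_iommu(struct device *dev)
{
	struct iommu_domain *domain;
	int ret;

	domain = iommu_domain_alloc(dev->bus);
	if (!domain)
		return ERR_PTR(-ENOMEM);

	ret = iommu_attach_device(domain, dev);
	if (ret) {
		iommu_domain_free(domain);
		return ERR_PTR(ret);
	}

	return domain;
}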
Arnd