Message-ID: <50E51C08.1020603@nvidia.com>
Date: Thu, 3 Jan 2013 07:50:00 +0200
From: Terje Bergström <tbergstrom@...dia.com>
To: Mark Zhang <nvmarkzhang@...il.com>
CC: "thierry.reding@...onic-design.de" <thierry.reding@...onic-design.de>,
"airlied@...ux.ie" <airlied@...ux.ie>,
"dev@...xeye.de" <dev@...xeye.de>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCHv4 0/8] Support for Tegra 2D hardware
On 03.01.2013 05:31, Mark Zhang wrote:
> Sorry, I didn't get it. Yes, in the current design you can pin all mem
> handles in one go, but I haven't found anything related to "locking
> only once per submit".
>
> My idea is:
> - remove "job->addr_phys"
> - assign "job->reloc_addr_phys" & "job->gather_addr_phys" separately
> - In "pin_job_mem", just call "host1x_memmgr_pin_array_ids" twice to
> fill the "reloc_addr_phy" & "gather_addr_phys".
>
> Anything I misunderstood?
The current design uses CMA, which makes pinning basically a no-op. When
we have IOMMU support, pinning requires calling into the IOMMU. Updating
the SMMU tables requires locking, and the bookkeeping in front of the
SMMU code will probably need its own locked data structures as well. For
example, preventing duplicate pinning might require a global table of
handles.
Putting all of the handles in one table allows duplicate detection
across the cmdbuf and reloc tables. That way each buffer is pinned
exactly once, which reduces the number of calls into the IOMMU.
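Roughly, what I have in mind looks like this. This is only a sketch; the
struct fields and helper names are approximations, not the actual code
in the series:

/*
 * Sketch only: collect every handle the job references (gathers and
 * relocs) into one table, detect duplicates there, and pin each
 * unique handle exactly once.
 */
static int pin_job_mem(struct host1x_job *job)
{
	unsigned int i, j, count = 0;
	int err;

	/* Collect all handles referenced by the job into a single table. */
	for (i = 0; i < job->num_gathers; i++)
		job->pin_ids[count++] = job->gathers[i].mem_id;
	for (i = 0; i < job->num_relocs; i++)
		job->pin_ids[count++] = job->relocs[i].target_id;

	for (i = 0; i < count; i++) {
		/* Duplicate? Reuse the mapping we already have. */
		for (j = 0; j < i; j++)
			if (job->pin_ids[j] == job->pin_ids[i])
				break;
		if (j < i) {
			job->addr_phys[i] = job->addr_phys[j];
			continue;
		}

		/* First occurrence: one IOMMU mapping per buffer. */
		err = host1x_memmgr_pin(job->memmgr, job->pin_ids[i],
					&job->addr_phys[i]);
		if (err < 0)
			return err;
	}

	return 0;
}

The reloc and gather address tables can then just point into the slices
of that one table, instead of being pinned separately.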
> "host1x_cma_pin_array_ids" doesn't return negative value right now, so
> maybe you need to take a look at it.
True, and that is also a consequence of using CMA: pinning can never
fail there. With an IOMMU, pinning can fail.
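So the IOMMU path has to return a negative errno and unwind whatever it
already mapped. Again just a sketch with made-up names, only to show
the error path:

static int host1x_iommu_pin_array_ids(struct host1x_memmgr *mgr,
				      unsigned int *ids,
				      dma_addr_t *addr_phys,
				      unsigned int count)
{
	unsigned int i;
	int err;

	for (i = 0; i < count; i++) {
		/* Each pin can fail when it has to map into the IOMMU. */
		err = host1x_iommu_pin(mgr, ids[i], &addr_phys[i]);
		if (err < 0)
			goto unwind;
	}

	return count;

unwind:
	/* Undo the mappings created before the failure. */
	while (i--)
		host1x_iommu_unpin(mgr, ids[i]);

	return err;
}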
Terje