Message-ID: <1623850736-389584-4-git-send-email-quic_c_gdjako@quicinc.com>
Date: Wed, 16 Jun 2021 06:38:44 -0700
From: Georgi Djakov <quic_c_gdjako@...cinc.com>
To: <will@...nel.org>, <robin.murphy@....com>
CC: <joro@...tes.org>, <isaacm@...eaurora.org>,
<baolu.lu@...ux.intel.com>, <pratikp@...eaurora.org>,
<iommu@...ts.linux-foundation.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <djakov@...nel.org>
Subject: [PATCH v7 03/15] iommu/io-pgtable: Introduce map_pages() as a page table op
From: "Isaac J. Manjarres" <isaacm@...eaurora.org>
Mapping memory into io-pgtables follows the same semantics
that unmapping memory used to follow (i.e. a buffer is mapped
one page block per call to the io-pgtable code). This means
it can be optimized in the same way that unmapping memory
was, so add a map_pages() callback to the io-pgtable ops
structure, allowing a range of pages of the same size to be
mapped within a single call.
Signed-off-by: Isaac J. Manjarres <isaacm@...eaurora.org>
Suggested-by: Will Deacon <will@...nel.org>
Signed-off-by: Georgi Djakov <quic_c_gdjako@...cinc.com>
---
include/linux/io-pgtable.h | 4 ++++
1 file changed, 4 insertions(+)
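
A minimal sketch of how a page-table format could back the new op,
assuming an existing single-block mapper named my_fmt_map_one_block()
(a hypothetical helper, not part of this series): it loops over the
page count and reports partial progress through *mapped so the caller
can unwind on failure.

#include <linux/io-pgtable.h>

/* Hypothetical helper assumed to map exactly one block of size 'pgsize'. */
static int my_fmt_map_one_block(struct io_pgtable_ops *ops, unsigned long iova,
				phys_addr_t paddr, size_t pgsize, int prot,
				gfp_t gfp);

static int my_fmt_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
			    int prot, gfp_t gfp, size_t *mapped)
{
	size_t done = 0;
	int ret = 0;

	while (pgcount--) {
		ret = my_fmt_map_one_block(ops, iova + done, paddr + done,
					   pgsize, prot, gfp);
		if (ret)
			break;
		done += pgsize;
	}

	/* Report what was actually mapped so the caller can unmap it on error. */
	if (mapped)
		*mapped = done;

	return ret;
}
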
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 9391c5fa71e6..c43f3b899d2a 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -143,6 +143,7 @@ struct io_pgtable_cfg {
* struct io_pgtable_ops - Page table manipulation API for IOMMU drivers.
*
* @map: Map a physically contiguous memory region.
+ * @map_pages: Map a physically contiguous range of pages of the same size.
* @unmap: Unmap a physically contiguous memory region.
* @unmap_pages: Unmap a range of virtually contiguous pages of the same size.
* @iova_to_phys: Translate iova to physical address.
@@ -153,6 +154,9 @@ struct io_pgtable_cfg {
struct io_pgtable_ops {
int (*map)(struct io_pgtable_ops *ops, unsigned long iova,
phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+ int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+ phys_addr_t paddr, size_t pgsize, size_t pgcount,
+ int prot, gfp_t gfp, size_t *mapped);
size_t (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
size_t size, struct iommu_iotlb_gather *gather);
size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
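
For context, a hedged caller-side sketch (my_smmu_domain and
my_smmu_map_pages are assumptions for illustration, not code from this
series): a driver can prefer ->map_pages() when the format provides it
and otherwise fall back to mapping one page-size block per ->map()
call, exactly as before.

#include <linux/io-pgtable.h>

struct my_smmu_domain {			/* assumed driver-private domain type */
	struct io_pgtable_ops	*pgtbl_ops;
};

static int my_smmu_map_pages(struct my_smmu_domain *dom, unsigned long iova,
			     phys_addr_t paddr, size_t pgsize, size_t pgcount,
			     int prot, gfp_t gfp, size_t *mapped)
{
	struct io_pgtable_ops *ops = dom->pgtbl_ops;
	size_t done = 0;

	if (ops->map_pages)
		return ops->map_pages(ops, iova, paddr, pgsize, pgcount,
				      prot, gfp, mapped);

	/* Fallback: one page-size block per call, the pre-existing behaviour. */
	while (pgcount--) {
		int ret = ops->map(ops, iova + done, paddr + done, pgsize,
				   prot, gfp);

		if (ret) {
			*mapped = done;
			return ret;
		}
		done += pgsize;
	}
	*mapped = done;
	return 0;
}
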