Message-Id: <20240301071918.64631-1-xuanzhuo@linux.alibaba.com>
Date: Fri,  1 Mar 2024 15:19:18 +0800
From: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
To: linux-kernel@...r.kernel.org
Cc: Robin Murphy <robin.murphy@....com>,
	Joerg Roedel <joro@...tes.org>,
	Will Deacon <will@...nel.org>,
	Christoph Hellwig <hch@....de>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	iommu@...ts.linux.dev,
	"Michael S. Tsirkin" <mst@...hat.com>,
	Zelin Deng <zelin.deng@...ux.alibaba.com>
Subject: [RFC] dma-mapping: introduce dma_can_skip_unmap()

In a typical workflow, we first perform a dma map on an address to
obtain a dma address, and later perform a dma unmap on that address.
Usually this works without issues. Under certain circumstances,
however, we need additional resources just to keep track of these dma
addresses. For instance, in layered architectures we pass the dma
address on to another module, and retrieving it back from that module
for the unmap can be difficult. In such cases we must allocate extra
resources to manage these dma addresses.
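
As a rough sketch of the usual pattern (dev, buf and len stand for
whatever the caller has at hand; the DMA API calls are the standard
ones):

    dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

    if (dma_mapping_error(dev, dma))
            return -ENOMEM;

    /* hand "dma" over to the hardware or to another layer here */

    /* the unmap needs the very same dma address (plus size and direction) */
    dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);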

However, the dma unmap operation is often effectively a no-op. If we
know in advance that the unmap is not necessary, we can save this extra
management overhead and skip the dma unmap calls altogether, which is
clearly advantageous.
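
With such a check, a caller could do something along these lines (a
sketch only: "meta", nr_frags and the error handling are made up for
illustration; dma_can_skip_unmap() is the helper added by this patch):

    if (dma_can_skip_unmap(dev)) {
            /* unmap will be skipped: no need to remember the addresses */
            meta->dma_addrs = NULL;
    } else {
            meta->dma_addrs = kcalloc(nr_frags, sizeof(dma_addr_t),
                                      GFP_ATOMIC);
            if (!meta->dma_addrs)
                    return -ENOMEM;
    }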

This tries to resolve the problem raised against this patchset:

 http://lore.kernel.org/all/20240225032330-mutt-send-email-mst@kernel.org

For a single packet, virtio-net may submit 1 to 19 dma addresses to the
virtio core. Having virtio-net keep those dma addresses wastes a lot of
memory when the unmap is not necessary, and if virtio-net instead
retrieves a packet's dma addresses back from the virtio core, it has to
receive up to 19 addresses in a single call. Since the long-term
direction is for the net drivers themselves to maintain the dma
addresses, we would like a way to check whether the unmap is necessary
at all.
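
As an illustration, virtio could decide once at virtqueue setup time
whether per-buffer dma addresses need to be kept at all (a sketch only:
vq->do_unmap, dma_dev and state->addr are hypothetical names, not the
current virtio code):

    /* at virtqueue setup: decide once whether unmap will be needed */
    vq->do_unmap = !dma_can_skip_unmap(dma_dev);

    /*
     * when queueing a packet: only remember the up-to-19 dma addresses
     * if the later unmap will actually use them
     */
    if (vq->do_unmap)
            state->addr[state->num++] = dma;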

Co-developed-by: Zelin Deng <zelin.deng@...ux.alibaba.com>
Signed-off-by: Zelin Deng <zelin.deng@...ux.alibaba.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
---
 drivers/iommu/dma-iommu.c   | 11 +++++++++++
 include/linux/dma-map-ops.h |  1 +
 include/linux/dma-mapping.h |  5 +++++
 kernel/dma/mapping.c        | 23 +++++++++++++++++++++++
 4 files changed, 40 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 50ccc4f1ef81..8c661a0e1111 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1706,6 +1706,16 @@ static size_t iommu_dma_opt_mapping_size(void)
 	return iova_rcache_range();
 }
 
+static bool iommu_dma_opt_can_skip_unmap(struct device *dev)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+
+	if (domain->type == IOMMU_DOMAIN_IDENTITY)
+		return true;
+	else
+		return false;
+}
+
 static const struct dma_map_ops iommu_dma_ops = {
 	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc			= iommu_dma_alloc,
@@ -1728,6 +1738,7 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.unmap_resource		= iommu_dma_unmap_resource,
 	.get_merge_boundary	= iommu_dma_get_merge_boundary,
 	.opt_mapping_size	= iommu_dma_opt_mapping_size,
+	.dma_can_skip_unmap	= iommu_dma_opt_can_skip_unmap,
 };
 
 /*
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 4abc60f04209..d508fa90bc06 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -83,6 +83,7 @@ struct dma_map_ops {
 	size_t (*max_mapping_size)(struct device *dev);
 	size_t (*opt_mapping_size)(void);
 	unsigned long (*get_merge_boundary)(struct device *dev);
+	bool (*dma_can_skip_unmap)(struct device *dev);
 };
 
 #ifdef CONFIG_DMA_OPS
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4a658de44ee9..af5d9275f8cc 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -140,6 +140,7 @@ int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 bool dma_can_mmap(struct device *dev);
+bool dma_can_skip_unmap(struct device *dev);
 bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
@@ -249,6 +250,10 @@ static inline bool dma_can_mmap(struct device *dev)
 {
 	return false;
 }
+static inline bool dma_can_skip_unmap(struct device *dev)
+{
+	return false;
+}
 static inline bool dma_pci_p2pdma_supported(struct device *dev)
 {
 	return false;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58db8fd70471..99a81932820b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -445,6 +445,29 @@ bool dma_can_mmap(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(dma_can_mmap);
 
+/**
+ * dma_can_skip_unmap - check if unmap can be skipped
+ * @dev: device to check
+ *
+ * Returns %true if @dev uses direct mapping or its DMA ops allow skipping unmap.
+ */
+bool dma_can_skip_unmap(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (is_swiotlb_force_bounce(dev))
+		return false;
+
+	if (dma_map_direct(dev, ops))
+		return true;
+
+	if (ops->dma_can_skip_unmap)
+		return ops->dma_can_skip_unmap(dev);
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(dma_can_skip_unmap);
+
 /**
  * dma_mmap_attrs - map a coherent DMA allocation into user space
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
-- 
2.32.0.3.g01195cf9f

